Vision & Roadmap

Revision as of 23:48, 9 November 2019


D-Cube is part of the IoTBench initiative, which gathers academics and industrial practitioners from the low-power wireless networking community to enable a better evaluation and comparison of the performance of low-power wireless communication protocols.

As an increasing number of IoT systems imposing strict dependability requirements on network performance are being developed and commercialized, the demand for dependable communication protocols that deliver information in a reliable, efficient, and timely manner is rising. In response to this need, many low-power wireless protocols have been proposed by both industry and academia over the last decade.

[[File:Logo_iotbench.png|355px|right]]

However, the lack of a standardized methodology for evaluating protocol performance often leads to a high divergence across experimental setups, which makes it impossible to compare results obtained by different authors [https://home.deib.polimi.it/mottola/papers/boano18bench.pdf (more info)]. As a consequence, there is an increasing need to rigorously benchmark low-power wireless systems under the exact same settings.

To address this issue, in 2016 we started the [https://iot.ieee.org/newsletter/march-2017/ewsn-dependability-competition-experiences-and-lessons-learned EWSN Dependability Competition Series] as a first attempt to rigorously benchmark the performance of low-power wireless systems in harsh RF environments. To support the competition, we have created D-Cube, a benchmarking infrastructure that makes it possible to accurately measure key dependability metrics such as end-to-end delay, reliability, and power consumption, and to graphically visualize their evolution in real time.
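As an illustration of what such metrics capture (a minimal sketch, not D-Cube's actual implementation), reliability and end-to-end delay can be derived from per-packet send and receive logs; the log format below is a hypothetical simplification:

```python
# Illustrative sketch (not D-Cube's actual code): computing two of the
# dependability metrics mentioned above from hypothetical packet logs.
# Each log entry is a (message_id, timestamp_in_seconds) pair.

def dependability_metrics(sent, received):
    """Return (reliability, mean end-to-end delay in seconds)."""
    sent_times = dict(sent)        # message_id -> send timestamp
    recv_times = dict(received)    # message_id -> receive timestamp
    delivered = sent_times.keys() & recv_times.keys()
    reliability = len(delivered) / len(sent_times) if sent_times else 0.0
    delays = [recv_times[m] - sent_times[m] for m in delivered]
    mean_delay = sum(delays) / len(delays) if delays else float("nan")
    return reliability, mean_delay

# Example: 4 packets sent, 3 of them delivered.
sent = [(1, 0.00), (2, 0.10), (3, 0.20), (4, 0.30)]
received = [(1, 0.05), (2, 0.17), (4, 0.42)]
reliability, delay = dependability_metrics(sent, received)
```

In a real deployment these timestamps would come from the testbed's wired observation back-channel rather than from the nodes themselves, so that measuring does not perturb the firmware under test.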

Over the years, D-Cube's hardware, software, and back-end have been [http://www.carloalbertoboano.com/documents/schuss18benchmark.pdf upgraded and enriched] with features enabling automatic benchmarking of protocol performance in order to support subsequent editions of the EWSN dependability competition. These features include binary patching to decouple the traffic pattern and node identities from the firmware under test, as well as the ability to generate reproducible Wi-Fi interference [[testbed capabilities|(more info)]].
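The idea behind binary patching can be sketched as follows (an assumption-laden illustration, not D-Cube's actual patching scheme: the placeholder marker and its layout are hypothetical): a known byte pattern compiled into the firmware is located in the image and overwritten with the value assigned by the testbed.

```python
# Illustrative sketch (hypothetical marker, not D-Cube's actual scheme):
# decouple the node identity from a firmware image by overwriting a known
# placeholder pattern in the compiled binary with the real node ID.

PLACEHOLDER = b"\xde\xad\xbe\xef"  # hypothetical 4-byte marker compiled
                                   # into the firmware at the node-ID slot

def patch_node_id(firmware: bytes, node_id: int) -> bytes:
    """Return a copy of the firmware with the placeholder replaced by
    node_id (4 bytes, little-endian). Raises if the marker is missing."""
    offset = firmware.find(PLACEHOLDER)
    if offset < 0:
        raise ValueError("node-ID placeholder not found in firmware image")
    patched = bytearray(firmware)
    patched[offset:offset + 4] = node_id.to_bytes(4, "little")
    return bytes(patched)

# Example: a tiny fake image with the marker in the middle.
image = b"\x00\x01" + PLACEHOLDER + b"\xff\xff"
patched = patch_node_id(image, 42)
```

Note that patching a real firmware image in a container format with integrity fields (e.g., Intel HEX record checksums) would additionally require recomputing those checksums after the bytes are rewritten.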