Overview of Available Suites

 

Latest revision as of 17:00, 27 February 2020


D-Cube currently supports four benchmark suites. Each suite is specified by: (i) a given hardware platform, (ii) a given application scenario, and (iii) a set of performance metrics. We describe next the benchmark suites on a high level:

  1. SkyDC_1 (Tmote Sky Data Collection v1). This benchmark resembles the first category of the EWSN 2019 dependability competition (http://ewsn2019.thss.tsinghua.edu.cn/competition-scenario.html).
    • Hardware platform: Tmote Sky.
    • Application scenario: In this benchmark, a fixed number of source nodes (at most eight) communicate to a single destination node over a multi-hop network. This benchmark hence focuses on multipoint-to-point traffic. In particular, the destination node collects sensor data of different lengths transmitted by the different source nodes and allows out-of-order delivery.
    • Performance metrics: The measured performance metrics are: (i) the reliability of transmissions, i.e., the number of messages correctly reported to each intended destination, (ii) the average end-to-end latency in communicating each message to its intended destination(s), and (iii) the average energy consumption on all nodes in the network. Note that, on each run, during the first 60 seconds no data is generated and the energy consumption is not measured, so as to allow the firmware under test to bootstrap the network.
    The results of this benchmark are available here. An example of the pre-defined config struct for binary patching for this benchmark suite is available here.

  2. SkyDD_1 (Tmote Sky Data Dissemination v1). This benchmark resembles the second category of the EWSN 2019 dependability competition (http://ewsn2019.thss.tsinghua.edu.cn/competition-scenario.html).
    • Hardware platform: Tmote Sky.
    • Application scenario: In this benchmark, a fixed number of source nodes (at most eight) needs to disseminate actuation commands of different lengths to a specific set of destination nodes over a multi-hop network. This benchmark hence focuses on point-to-multipoint traffic. In particular, each source node is associated with a specific set of destinations (at most eight), which will be injected as an input parameter into the firmware under test. A destination can receive messages from only a single source node, cannot act as a source at the same time, and allows out-of-order delivery.
    • Performance metrics: The measured performance metrics are: (i) the reliability of transmissions, i.e., the number of messages correctly reported to each intended destination, (ii) the average end-to-end latency in communicating each message to its intended destination(s), and (iii) the average energy consumption on all nodes in the network. Note that, on each run, during the first 60 seconds no data is generated and the energy consumption is not measured, so as to allow the firmware under test to bootstrap the network.
    The results of this benchmark are available here. An example of the pre-defined config struct for binary patching for this benchmark suite is available here.

  3. nRFDC_1 (nRF52840 Timely Data Collection v1). This benchmark runs on the nRF52840 platform and resembles a data collection with bounded delays within a multi-hop network.
    • Hardware platform: nRF52840.
    • Application scenario: This benchmark focuses on multipoint-to-point traffic: up to 48 source nodes generate raw sensor values of different lengths, which should be communicated to the same destination. The latter may be located several hops away from a given source node, even when making use of the coded PHY layers available on the nRF52. The messages containing the raw sensor values should be forwarded to the intended destination as efficiently as possible within a maximum per-message delay bound ∂, i.e., the end-to-end delay of every message from its generation to its reception at the destination should be lower than ∂ (which is fixed and applies to all messages exchanged during a test run). If a message has been received with an end-to-end delay greater than ∂, it is considered to be lost.
    • Performance metrics: The measured performance metrics are: (i) the reliability of transmissions, i.e., the number of messages correctly reported to each intended destination, and (ii) the average energy consumption on all nodes in the network. Note that, on each run, during the first 60 seconds no data is generated and the energy consumption is not measured, so as to allow the firmware under test to bootstrap the network.
    The results of this benchmark are available here. An example of the pre-defined config struct for binary patching for this benchmark suite is available here.

  4. nRFDD_1 (nRF52840 Timely Data Dissemination v1). This benchmark runs on the nRF52840 platform and resembles a data dissemination with bounded delays within a multi-hop network.
    • Hardware platform: nRF52840.
    • Application scenario: In this benchmark, a single source node needs to disseminate actuation commands of different lengths to a specific set of up to 48 destination nodes over a multi-hop network. This benchmark hence focuses on point-to-multipoint traffic. The identity of the source node and its associated destination nodes will be injected as an input parameter into the firmware under test. A destination cannot act as a source at the same time, but allows out-of-order delivery. The messages should be disseminated to all intended destinations as efficiently as possible within a maximum per-message delay bound ∂, i.e., the end-to-end delay of every message from its generation to its reception at the destination should be lower than ∂ (which is fixed and applies to all messages exchanged during a test run). If a message has been received with an end-to-end delay greater than ∂, it is considered to be lost.
    • Performance metrics: The measured performance metrics are: (i) the reliability of transmissions, i.e., the number of messages correctly reported to each intended destination, and (ii) the average energy consumption on all nodes in the network. Note that, on each run, during the first 60 seconds no data is generated and the energy consumption is not measured, so as to allow the firmware under test to bootstrap the network.
    The results of this benchmark are available at https://iti-testbed.tugraz.at/leaderboard/benchmarksuite/4. An example of the pre-defined config struct for binary patching for this benchmark suite is available at https://iti-testbed.tugraz.at/wiki/images/d/db/NRFDC_1.zip.
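A pre-defined config struct for binary patching, as mentioned for each suite above, could look roughly like the following. This is a hypothetical sketch only: the field names, types, and layout are invented for illustration, and the authoritative layout is the one in the example archive referenced above. The idea is that the testbed locates the struct in the compiled image (e.g., via its symbol or linker section) and overwrites its fields before flashing, so the same binary can be parameterized per test run:

```c
/* Hypothetical config struct for binary patching. All field names, types,
 * and the section name are illustrative assumptions, not D-Cube's actual
 * layout; consult the suite's example archive for the real definition. */
#include <stdint.h>

#define MAX_NODES 48

typedef struct __attribute__((packed)) {
    uint8_t  node_id;              /* this node's testbed ID                 */
    uint8_t  source_id;            /* ID of the source node                  */
    uint8_t  dest_count;           /* number of destination nodes            */
    uint8_t  dest_ids[MAX_NODES];  /* IDs of the destination nodes           */
    uint8_t  msg_len;              /* payload length in bytes                */
    uint32_t delta_ms;             /* per-message delay bound (∂) in ms      */
} dcube_config_t;

/* Placed in a dedicated section so a patching tool can find and rewrite it
 * in the binary without recompilation; volatile keeps the compiler from
 * constant-folding the patched values. */
volatile dcube_config_t dcube_config
    __attribute__((section(".dcube_config"))) = {0};
```

The firmware under test would then read its role (source or destination), its peer set, and the delay bound from this struct at boot instead of hard-coding them.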