Figure 1: Architecture of the strain monitoring sensor network.
Drivers were written for our strain board, and at the ap-
plication level we developed a traditional sense and send
system that acquires, stores to flash and transmits network
information (time, RSSI, beacon interval, sequence number,
neighbours), strain and temperature/humidity data at five-minute
intervals (92 bytes of payload in total). Storing samples
to flash ensures that a complete dataset is available once construction
is finished, regardless of network conditions during the deployment.
This is important for post-deployment analysis of data for
techniques such as edge mining [1].
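The per-interval record (network state plus strain and temperature/humidity readings) and its flash-first storage can be sketched as below. This is an illustrative reconstruction, not the actual on-air format: the paper only lists the record's contents and the 92-byte total, so the field names and widths here (`sample_t`, `log_append`) are our assumptions, and the neighbour list is reduced to a count.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* One sample record, taken every five minutes.  Field names and widths
 * are illustrative assumptions -- the text states only the contents
 * (time, RSSI, beacon interval, sequence number, neighbours, strain,
 * temperature/humidity) and the 92-byte payload total. */
typedef struct __attribute__((packed)) {
    uint32_t timestamp;        /* node time at sampling */
    int8_t   rssi;             /* RSSI of the current parent link */
    uint16_t beacon_interval;  /* current beacon interval */
    uint16_t seqno;            /* per-node sequence number */
    uint8_t  num_neighbours;   /* size of the neighbour table */
    int32_t  strain;           /* sign-extended ADC strain reading */
    int16_t  temperature;      /* SHT15 temperature, centi-degrees C */
    uint16_t humidity;         /* SHT15 relative humidity, centi-percent */
} sample_t;

/* Append a record to a flash-like byte log; returns bytes written, or 0
 * if the log is full.  Writing every sample locally is what allows the
 * full dataset to be recovered even when the network drops packets. */
size_t log_append(uint8_t *flash, size_t cap, size_t *used, const sample_t *s)
{
    if (*used + sizeof(*s) > cap)
        return 0;
    memcpy(flash + *used, s, sizeof(*s));
    *used += sizeof(*s);
    return sizeof(*s);
}
```

In this sketch the radio path would serialise the same record into the outgoing payload, so the flash log and the transmitted stream stay byte-for-byte consistent.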
Figure 2: Sensor node hardware.

2. SYSTEM OVERVIEW
Figure 1 shows an overview of the end-to-end system architecture. Data flows from the sensor nodes to a sink, where it is transmitted to a remote server via 3G and made available to user applications. To reduce risk and development time, we opted to use off-the-shelf hardware and software wherever possible.

2.1 Sensor node
Our sensor node combines the Zolertia Z1 platform with a custom strain gauge board (see Figure 2). The Z1 is based around an MSP430 CPU and a CC2420 radio. Our custom board provides input for one resistive strain gauge (strut loading is assumed to be axial), whose readings are acquired using a Wheatstone bridge combined with a low-power 16-bit ADC (TI ADS1115). Measurement resolution is <1 με with a measurement range of ±2500 με. External temperature/humidity sensing is provided by a Sensirion SHT15. Each sensor node is packaged in an IP65 aluminium enclosure (115×65.5×50 mm, 370 g) with holes for an external radio antenna (4.4 dBi Antenova Titanis), a gland for the strain gauge wiring, and a waterproof breathable membrane for the SHT15. Magnetic feet on the enclosures allow the nodes to be placed over the strain gauges.

Our software was developed on top of the Contiki WSN OS. Contiki provides a network stack with a low-power MAC (ContikiMAC) and a multi-hop tree formation/data collection protocol (Contiki Collect) that we used almost entirely out of the box. The only change necessary was to reduce the size of the recent-message buffer kept by each node; this reduced the time needed to verify network formation after a node restart.

Micro-benchmarking was used to estimate the lifetime of the sensor nodes for a given battery capacity and sampling rate. The individual operations of a sensing cycle (one 5-minute interval) were measured and aggregated to determine a baseline for node lifetime: 270 days of operation are expected with two 7.8 Ah 1.5 V C cells.

2.2 Gateway
Our Gateway was built using a Raspberry Pi model A+ combined with a TelosB node and a USB 3G modem (both with external antennas). The Gateway is deployed inside an IP65 mild steel enclosure and mounted on a pole. Due to deployment constraints, we chose to power the Gateway using a 12 V 100 Ah battery. During normal operation, WSN data is collected by the TelosB, aggregated at the Raspberry Pi, and transmitted hourly via 3G to a remote server.

From prior experience with deploying WSNs, we focused on ways to i) lower power consumption, ii) improve fault tolerance to minimise on-site maintenance, and iii) make on-site maintenance as easy as possible. To reduce power, we disabled HDMI on the Raspberry Pi and built custom circuitry that allows the power to the TelosB and 3G modem to be controlled through GPIOs. To provide fault tolerance between the TelosB and the Raspberry Pi, we implemented a simple handshake connection so that the Raspberry Pi can periodically check and restart the TelosB if necessary. For 3G transmissions, if an hourly update fails, it is rolled into the following hour's transmission, and data is always archived locally in case of extended 3G outages. To make on-site debugging and deployment easier, the Gateway hosts a USB Wi-Fi dongle. When the dongle is connected, the Gateway becomes a wireless access point (using hostap), hosting a web page that shows real-time updates of the data.

Our measures reduced the average power requirement of the Gateway from over 400 mA to 128 mA (at 12 V). Based on micro-benchmarking tests, we estimate that a 100 Ah battery will provide one month's operation.
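Both lifetime estimates reduce to simple battery arithmetic once an average current draw has been micro-benchmarked. A minimal sketch (the helper name is ours; the capacity and current figures are those reported in the text):

```c
#include <assert.h>

/* Estimated lifetime in days for a battery of capacity_mah driving an
 * average load of avg_ma.  Self-discharge, temperature effects and the
 * voltage cutoff are ignored; the measured average already aggregates
 * the duty-cycled radio, CPU and sensor currents over a sensing cycle. */
static double lifetime_days(double capacity_mah, double avg_ma)
{
    return capacity_mah / avg_ma / 24.0;
}
```

With the Gateway's measured 128 mA average, a 100 Ah (100,000 mAh) battery gives 100000/128 ≈ 781 hours, i.e. roughly 32 days, consistent with the one-month estimate. Run in reverse, the 270-day node estimate on 7.8 Ah C cells implies an average node draw on the order of 1.2 mA.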
3. CASE STUDY: MRT EXCAVATION SITE
From February to July 2015, we deployed our system on
the TERS of an active MRT excavation site in Singapore.
The main goals of our deployment were to: i) validate the
3.1 Deployment 1: Two Node Deployment
Two sensor nodes were deployed on a level 2 strut at the excavation, 10 metres below ground level. The gateway was deployed at ground level near the excavation, 12 metres away from the nodes. The strain gauges were welded to the strut next to the contractor's vibrating wire system, as shown in Figure 3, while the strut had zero load on it. This gave us reference measurements to compare against. Deployment 1 lasted for 70 days, during which time both nodes sent data, but one node's data stream began to drift away from realistic values when compared against reference contractor data. We suspect this was due to water ingress during a storm event, but for safety reasons it was not possible to retrieve either of the nodes to debug, since the construction had progressed significantly by this point. After this period the Gateway was moved to another part of the construction site for deployment 2; the nodes continue to log data and will be collected at the end of the construction.

Figure 4 shows the daily PDR for the deployment, with an average PDR of 83.9%. For 50 days, the PDR was 98% on average. The drop in PDR for the other 20 days can be

3.2 Deployment 2: Four Node Deployment
Two sensor nodes were initially deployed on a level 1 strut, and two more on a level 2 strut as it was installed several weeks later. This demonstrated growing the network as the construction evolved. The gateway was situated at ground level, 10 m away from the level 1 strut.

The level 1 deployment has been in place for 82 days, with level 2 in place for 47 days; Table 1 shows the results for both levels. The PDR ranges from 86% to 94%, and, as expected, the nodes on level 1 transmit directly to the sink over 95% of the time. The nodes on the second level still manage to transmit directly to the sink between 72% and 80% of the time. The network can be considered stable, with the maximum beacon interval being reported 97% of the time.

3.3 Deployment experiences
The deployments allowed us to test the integrity of the network and the data collected, and to better understand how a network could be scaled up alongside continuing construction. We found that the Gateway lifetime was in line with our predictions, and the battery is being replaced every 30 days.
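The PDR figures reported for the deployments follow directly from the logged per-node sequence numbers; a minimal sketch (the function name is ours, and the 288-packets-per-day expectation follows from the five-minute sampling interval stated earlier):

```c
#include <assert.h>

/* Packet delivery ratio as a percentage.  With one packet per node per
 * five-minute interval, a node is expected to deliver 288 packets per
 * day; gaps in the sink-side sequence numbers count as losses. */
static double pdr_percent(unsigned received, unsigned expected)
{
    return expected ? 100.0 * (double)received / (double)expected : 0.0;
}
```

For example, a node delivering 242 of its 288 expected daily packets has a daily PDR of about 84%, in line with the deployment 1 average of 83.9%.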
Figure 3: Example deployment on a strut next to vibrating wire gauges.