Computing and Compressive Sensing in
Wireless Sensor Networks
Zhenzhi Qian, Chu Wang
Department of Electronic Engineering
Shanghai Jiao Tong University, China
Outline
Introduction
Definition & Model
Computing in Two-Node Network
Function Computation Rate
Compressive Sensing: A Basic Application
Future Work
Introduction
Wireless Sensor Networks
Task of sensing the environment
Task of communicating function values to the sink node
Function Types
Type-sensitive
e.g. Network of temperature sensors
Type-threshold
e.g. Alarm network
Aggregate functions under end-to-end flow
Energy-constrained
Memory-constrained
Bandwidth-constrained
Introduction
An alternative solution
In-network computation
Perform operations on received data
A series of fundamental issues in in-network computation:
How best to perform distributed computation?
What is the optimal strategy to compute?
Challenges in WSN data gathering
Global communication cost reduction
Energy consumption load balancing
Outline
Introduction
Definition & Model
Computing in Two-Node Network
Function Computation Rate
Compressive Sensing: A Basic Application
Future Work
Definition & Model
$X = \{x_i \mid i = 1, 2, \dots\}$ is the set of the measurement data
$f(x, y)$ is the function used for computation
$d_{ij}$ is the Euclidean distance between sensor i and sensor j
r is the sensor's transmission range
Collocated Network (figure (a)): the network with $d_{ij} \le r$ for all $i, j$.
Random Planar Network: the n nodes and the sink node are i.i.d. distributed, and $r(n)$ is chosen to ensure connectivity by multi-hop communication.
Outline
Introduction
Definition & Model
Computing in Two-Node Network
Function Computation Rate
Compressive Sensing: A Basic Application
Conclusion & Future Work
Computing in Two-Node Network
The two connected processors can exchange bits one at a time over the link
When A and B both know the function value $f(x, y)$, the communication terminates
The problem is to minimize the computation time given a throughput-constrained link between the processors and an input split between them
(Figure: node A holds $x_i \in X$, node B holds $y_j \in Y$; together they must compute $f(x_i, y_j) \in Z$.)
Computing in Two-Node Network
A general protocol's functionality:
Decide which node transmits next (input: the bits transmitted so far)
Decide the value of the bit to be transmitted (input: the node's own input value + the previous transmissions)
A naïve protocol:
The communication complexity of function f is at most $\lceil \log |X| \rceil + \lceil \log |Z| \rceil$ slots
Can we do better? What is the lower bound?
(Figure: A sends its input x to B, and B computes and returns $f(x, y)$.)
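A minimal Python sketch of the naïve protocol above (the toy function and alphabet sizes are illustrative assumptions, not from the slides): A ships its whole input in $\lceil \log |X| \rceil$ bits, then B computes f locally and replies with the value in $\lceil \log |Z| \rceil$ bits.

```python
import math

def naive_protocol(x, y, f, X_size, Z_size):
    """Naive two-party protocol: A ships its whole input, then B replies with f(x, y).
    Returns the function value and the total number of transmitted bits (slots)."""
    bits_a_to_b = math.ceil(math.log2(X_size))  # A encodes x with ceil(log|X|) bits
    z = f(x, y)                                 # B now knows both inputs and evaluates f
    bits_b_to_a = math.ceil(math.log2(Z_size))  # B returns the value with ceil(log|Z|) bits
    return z, bits_a_to_b + bits_b_to_a

# Toy example (hypothetical): X = Y = Z = {0, ..., 15}, f(x, y) = max(x, y).
value, slots = naive_protocol(9, 13, max, X_size=16, Z_size=16)
print(value, slots)  # -> 13, 8 slots in total
```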
Computing in Two-Node Network
The lower bound of communication complexity: $\log |\mathrm{Range}(f)|$
Any two distinct function values must correspond to different
sequences of transmitted bits
e.g. $f(x_1, y_1) \ne f(x_2, y_2)$ and $f(x_1, y_1) \ne f(x_1, y_2)$ must each be distinguished by the transcript
Computing in Two-Node Network
Protocol: Matrix Representation

A\B | 1  2  3  4
----+-----------
 1  | 0  0  0  0
 2  | 0  0  0  1
 3  | 0  0  0  1
 4  | 1  1  0  1
Computing in Two-Node Network
There are several ways to derive a lower bound on the number of partitions required:
Rank-based: $\log \mathrm{Rank}(C)$
Fooling-set-based: $\log m$, where m is the size of a fooling set
Proof idea: 1) one round (one transmitted bit) splits the matrix into two submatrices, at least one of which retains at least ½ of the rank; 2) the fooling-set bound follows by a similar argument.
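As a small illustration (assuming NumPy and taking the 4×4 matrix from the previous slide as the function matrix C), the rank-based bound can be evaluated directly:

```python
import numpy as np

# Matrix representation from the previous slide: rows = A's input, columns = B's input,
# entries = function values.
C = np.array([[0, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 1],
              [1, 1, 0, 1]])

rank = np.linalg.matrix_rank(C)  # rank over the reals: 2 (two identical rows, one zero row)
print(rank, np.log2(rank))       # rank-based lower bound log2(rank(C)) = 1 bit
```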
Outline
Introduction
Definition & Model
Computing in Two-Node Network
Function Computation Rate
Compressive Sensing: A Basic Application
Conclusion & Future Work
Function Computing Rate
Scenario of Sensor network Computation
A tree rooted at the collector node
Function Computing Rate
Function types and corresponding results [Kumar]:
Histogram: the statistics of node measurements
Computation rate: $O\!\left(\frac{1}{\log n}\right)$
Function Computing Rate
Function types and corresponding results:
Type-sensitive:
A symmetric function $f(\cdot)$ is defined as type-sensitive if there exist some $r \in (0,1)$ and an integer N such that, for all $n \ge N$ and any $j \le n - rn$, there are two subsets of values $\{y_{j+1}, y_{j+2}, \dots, y_n\}$ and $\{z_{j+1}, z_{j+2}, \dots, z_n\}$ satisfying
$f(x_1, \dots, x_j, z_{j+1}, z_{j+2}, \dots, z_n) \ne f(x_1, \dots, x_j, y_{j+1}, y_{j+2}, \dots, y_n)$
An easy note: if any input of the sensor network changes, the value of a type-sensitive function changes due to the small local difference.
Examples of type-sensitive functions: average, median, majority, histogram
Computing rate in a collocated network: $O\!\left(\frac{1}{n}\right)$
Computing rate in a random planar multihop network: $O\!\left(\frac{1}{\log n}\right)$
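A toy check of the note above (illustrative values only, not from the paper): with a block of inputs fixed, the remaining inputs can still swing the average, while the maximum may already be pinned.

```python
# Toy illustration: fix the first j measurements and vary the rest.
fixed = [5, 5, 5, 5, 5, 5]   # x_1,...,x_j, with values at the top of the range {0,...,5}
rest_a = [0, 0, 0, 0]        # y_{j+1},...,y_n
rest_b = [5, 5, 5, 5]        # z_{j+1},...,z_n

avg = lambda v: sum(v) / len(v)

# Average (type-sensitive): the unfixed values change the result.
print(avg(fixed + rest_a), avg(fixed + rest_b))   # 3.0 vs 5.0 -> different

# Maximum (type-threshold, not type-sensitive): the fixed block already pins the answer.
print(max(fixed + rest_a), max(fixed + rest_b))   # 5 vs 5 -> identical
```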
Function Computing Rate
Function types and corresponding results:
Type-threshold:
A symmetric function $f(\cdot)$ is defined as type-threshold if there exists a nonnegative vector $\theta$, called the threshold vector, such that
$f(x) = f'(\tau(x)) = f'(\min(\tau(x), \theta))$ for all $x$,
where $\tau(x)$ is the type (histogram) vector of $x$ and the minimum is taken element-wise.
An easy note: suppose a protocol that advances a threshold: when a given sensor measurement is above the threshold it is considered by the computation; otherwise it can be safely ignored.
(A light-hearted example on the slide to help understand type-threshold functions: each of the n nodes reports attributes such as "tall, wealthy, handsome" or their opposites, and the function checks attributes like wealth and pulchritude against the thresholds.)
Function Computing Rate
Examples of type-threshold functions:
Maximum, minimum, k-th largest value
Computing rate in a collocated network: $O\!\left(\frac{1}{\log n}\right)$
Computing rate in a random planar multi-hop network: $O\!\left(\frac{1}{\log\log n}\right)$
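A minimal sketch of the type-threshold structure $f(x) = f'(\min(\tau(x), \theta))$ for the maximum over $\{0, \dots, k\}$ (NumPy assumed; the helper names are illustrative): a threshold vector of all ones suffices, because the maximum only needs to know whether each value occurs at least once.

```python
import numpy as np

def type_vector(x, k):
    """tau(x): how many measurements take each value in {0, ..., k}."""
    return np.bincount(x, minlength=k + 1)

def max_from_clipped_type(x, k):
    """Maximum is type-threshold: clip the type vector with theta = (1, ..., 1)."""
    theta = np.ones(k + 1, dtype=int)               # threshold vector
    clipped = np.minimum(type_vector(x, k), theta)  # min(tau(x), theta), element-wise
    return max(v for v in range(k + 1) if clipped[v] == 1)

x = np.array([2, 0, 3, 3, 1, 3])
print(max_from_clipped_type(x, k=5), x.max())  # both 3: the clipped counts determine the max
```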
Outline
Introduction
Definition & Model
Computing in Two-Node Network
Function Computation Rate
Compressive Sensing: A Basic Application
Conclusion & Future Work
Compressive sensing
Introduction to compressive sensing [MobiCom 2009]
Baseline data collection
Compressive data gathering
Compressive sensing
Analysis :
The sink obtains M weighted sums $y_i = \sum_{j=1}^{N} \varphi_{ij} d_j$, $i = 1, 2, \dots, M$, where $\varphi_{ij}$ denotes the coefficient of the j-th sensor node in the i-th sum round. This coefficient is a random value, but it can be reproduced at the sink node by preserving the pseudo-random number sequence of each sensor, meaning the matrix $\Phi$ is effectively saved beforehand.
There are N nodes in total and M rounds of gathering.
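A sketch of the seeding idea described above (node IDs used as seeds and the toy sizes are assumptions): each sensor derives its column of coefficients from a pseudo-random generator seeded by its ID, so the sink can regenerate the whole matrix $\Phi$ without it ever being transmitted.

```python
import numpy as np

N, M = 8, 3  # N sensors, M weighted-sum rounds (toy sizes)

def coefficients(node_id, M):
    """phi_{1j}, ..., phi_{Mj} for sensor j, reproducible from its seed."""
    rng = np.random.default_rng(seed=node_id)
    return rng.standard_normal(M)

readings = np.arange(1.0, N + 1.0)  # d_1, ..., d_N (toy data)

# Each sensor j contributes phi_{ij} * d_j; the sums are accumulated hop by hop.
y = sum(coefficients(j, M) * readings[j] for j in range(N))

# The sink regenerates Phi from the same seeds and obtains identical measurements.
Phi = np.column_stack([coefficients(j, M) for j in range(N)])  # M x N
print(np.allclose(y, Phi @ readings))                          # True
```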
Compressive sensing
Data recovery
Find a particular domain $\Psi$ in which the sensor readings $d = [d_1, d_2, \dots, d_N]^T$ form a K-sparse signal; then $x = [x_1, x_2, \dots, x_N]^T$ are the coefficients, given as
$d = \sum_{i=1}^{N} x_i \psi_i$, i.e. $d = \Psi x$
The domain is a design choice; usually the DCT or a wavelet basis is preferred.
Compressive sampling theory requires M to satisfy
$M \ge c\, \mu^2(\Phi, \Psi)\, K \log N$, where $\mu(\Phi, \Psi) = \sqrt{N} \max_{1 \le i, j \le N} |\langle \varphi_i, \psi_j \rangle|$,
for the K-sparse signal to be reconstructable.
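A small numerical sketch of the coherence formula above, assuming a Gaussian $\Phi$ with unit-norm rows and the orthonormal DCT basis as $\Psi$ (SciPy's dct is used to build it):

```python
import numpy as np
from scipy.fft import dct

N, M = 256, 64
Psi = dct(np.eye(N), norm='ortho', axis=0)         # orthonormal DCT basis, columns psi_j
Phi = np.random.randn(M, N)
Phi /= np.linalg.norm(Phi, axis=1, keepdims=True)  # unit-norm measurement rows phi_i

# mu(Phi, Psi) = sqrt(N) * max_{i,j} |<phi_i, psi_j>|
mu = np.sqrt(N) * np.abs(Phi @ Psi).max()
print(mu)  # typically around 4-5 here: low coherence keeps M ~ c * mu^2 * K * log N modest
```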
Compressive sensing
Data recovery
To summarize the conditions so far:
The sink node holds M sums, enough for reconstruction under the constraint given previously.
$y = \Phi \Psi x$, where $\Phi$ and $\Psi$ are known to us.
As given by compressive sensing theory, the problem is converted to an $\ell_1$-norm minimization:
Find $\min_{x \in \mathbb{R}^N} \|x\|_{\ell_1}$ subject to $y = \Phi \Psi x$, where $\|x\|_{\ell_1} = \sum_{i=1}^{N} |x_i|$
This can be solved by linear programming techniques [13].
Finally, using $d = \Psi x$, we obtain the sensor readings.
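An end-to-end sketch of this recovery step under toy assumptions (sizes, a Gaussian $\Phi$, the DCT as $\Psi$, and SciPy's linprog as the LP solver): the $\ell_1$ problem is rewritten as a linear program over $[x; t]$ with $-t \le x \le t$, and the readings follow from $d = \Psi x$.

```python
import numpy as np
from scipy.fft import dct
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, M, K = 64, 28, 3                             # toy sizes: N sensors, M sums, K-sparse

# Readings d are K-sparse in the DCT domain: d = Psi x with K nonzero coefficients.
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K) * 10
Psi = dct(np.eye(N), norm='ortho', axis=0)      # sparsifying basis
d = Psi @ x_true                                # sensor readings

Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # pseudo-random measurement matrix
y = Phi @ d                                     # M weighted sums collected at the sink
A = Phi @ Psi                                   # y = A x

# l1 minimization as a linear program over z = [x; t]: min sum(t) s.t. -t <= x <= t, A x = y.
c = np.concatenate([np.zeros(N), np.ones(N)])
A_ub = np.block([[np.eye(N), -np.eye(N)], [-np.eye(N), -np.eye(N)]])
b_ub = np.zeros(2 * N)
A_eq = np.hstack([A, np.zeros((M, N))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * N + [(0, None)] * N)

x_hat = res.x[:N]
d_hat = Psi @ x_hat                             # recovered readings d = Psi x
print(np.max(np.abs(d_hat - d)))                # small (often essentially zero) error
```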
Outline
Introduction
Definition & Model
Computing in Two-Node Network
Function Computation Rate
Compressive Sensing: A Basic Application
Future Work
Future Work
Consider mobility in WSNs
Gossip algorithms
Coding strategy: LDPC
Thank you!