Objectives
• Learn case study 2, intelligent surveillance
• Learn the application model, architecture, and scenarios of case study 2
• Learn how to simulate case study 2
* The case study is based on Gupta et al. [1]
Surveillance: Challenges
• Distributed cameras surveilling an area
  – Public safety and security, manufacturing, transportation, health care
• Manual monitoring of video streams is not practical
• Automatic analysis of data coming from cameras
Zao et al., 2014
Surveillance: Requirements
• Low-latency communication
  – Real-time tuning of pan-tilt-zoom (PTZ) parameters of multiple cameras (coverage)
  – Communication between cameras and the set of camera control strategies
• Handling a big volume of data
  – Continuous transmission of captured video frames for processing
  – Huge traffic when all cameras in the system are considered
  – Network load and congestion
• Heavy long-term processing
  – Camera control needs to be updated constantly: learning optimal PTZ parameter calculation
  – Requires analysis of the decisions taken by the control strategy over a long period of time
A distributed approach is desirable
Intelligent Surveillance
• Coordinating multiple cameras, each with a different field of view (FOV), to surveil an area
• Coordinated tuning of PTZ parameters for the best view of the area
• Alerting in the case of unusual events
Smart Camera
• Detects motion (events of interest) in its FOV and starts sending a video stream to the intelligent surveillance application
• The application locates the moving object in the video stream and initiates tracking
• The application constantly tunes the PTZ parameters of the cameras to get the best view of all tracked objects
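In iFogSim, each smart camera is represented by a Sensor that emits CAMERA tuples and an Actuator for its PTZ control, both attached to the camera's fog device. A minimal sketch following the structure of DCNSFog.java; the transmission interval, latencies, and the addCamera helper name are illustrative assumptions:

    import org.fog.entities.Actuator;
    import org.fog.entities.FogDevice;
    import org.fog.entities.Sensor;
    import org.fog.utils.distribution.DeterministicDistribution;

    // Hypothetical helper: attaches an image sensor and a PTZ actuator
    // to a smart-camera fog device and registers them in the global lists.
    private static void addCamera(String id, int userId, String appId, FogDevice camera) {
        Sensor sensor = new Sensor("s-" + id, "CAMERA", userId, appId,
                new DeterministicDistribution(5)); // emits a CAMERA tuple every 5 time units (illustrative)
        sensor.setGatewayDeviceId(camera.getId());
        sensor.setLatency(1.0);   // link from image sensor to camera board (illustrative)
        sensors.add(sensor);

        Actuator ptz = new Actuator("ptz-" + id, userId, appId, "PTZ_CONTROL");
        ptz.setGatewayDeviceId(camera.getId());
        ptz.setLatency(1.0);      // link from camera board to PTZ motor (illustrative)
        actuators.add(ptz);
    }

The sensors and actuators lists are the static lists declared in DCNSFog.java (see the Simulation Variables slide).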
Modules (1)
• Motion detector: detects motion of an object
  – Continuously reads raw video streams
  – Embedded inside smart cameras
  – Forwards the video stream to the object detector if an event is detected
• Object detector: extracts the moving object
  – Receives video streams from the motion detectors in smart cameras
  – Compares with previously discovered objects in the area
  – Tracking is activated for a new object; its coordinates are calculated
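In iFogSim these modules are declared as AppModules connected by AppEdges inside createApplication(appId, userId). A minimal sketch of the upstream half of the module graph, following the structure of DCNSFog.java; the RAM sizes, CPU lengths, and tuple sizes below are illustrative:

    // Declare the application modules (RAM in MB, illustrative)
    Application application = Application.createApplication(appId, userId);
    application.addAppModule("motion_detector", 10);   // runs inside the smart camera
    application.addAppModule("object_detector", 10);
    application.addAppModule("object_tracker", 10);
    application.addAppModule("user_interface", 10);

    // Raw frames from the image sensor to the motion detector
    application.addAppEdge("CAMERA", "motion_detector", 1000, 20000,
            "CAMERA", Tuple.UP, AppEdge.SENSOR);
    // Video stream forwarded to the object detector only when motion is detected
    application.addAppEdge("motion_detector", "object_detector", 2000, 2000,
            "MOTION_VIDEO_STREAM", Tuple.UP, AppEdge.MODULE);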
Modules (2)
• Object tracker: receives the last calculated coordinates of tracked objects
  – Calculates the optimal PTZ configuration of all cameras covering the area
  – The PTZ information is sent periodically to the PTZ control of the cameras
• User interface: presents the tracked objects to the user
  – Requires filtered video streams from the object detector
  – Sends a part of the video streams to the user's device
* PTZ control: adjusts the smart camera to the optimal PTZ parameters sent by the object tracker
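The downstream control path and the module input/output relationships complete the graph. A sketch continuing the fragment above; the selectivities, periodicity, tuple sizes, and the chosen loop definition are illustrative assumptions:

    // Coordinates flow up from the object detector; PTZ parameters flow back down
    application.addAppEdge("object_detector", "object_tracker", 1000, 100,
            "OBJECT_LOCATION", Tuple.UP, AppEdge.MODULE);
    application.addAppEdge("object_detector", "user_interface", 500, 2000,
            "DETECTED_OBJECT", Tuple.UP, AppEdge.MODULE);
    // Periodic actuator edge: PTZ parameters pushed to the camera every 100 time units
    application.addAppEdge("object_tracker", "PTZ_CONTROL", 100, 28, 100,
            "PTZ_PARAMS", Tuple.DOWN, AppEdge.ACTUATOR);

    // For each input tuple type, which output tuple a module emits and how often
    application.addTupleMapping("motion_detector", "CAMERA", "MOTION_VIDEO_STREAM",
            new FractionalSelectivity(1.0));
    application.addTupleMapping("object_detector", "MOTION_VIDEO_STREAM", "OBJECT_LOCATION",
            new FractionalSelectivity(1.0));
    application.addTupleMapping("object_detector", "MOTION_VIDEO_STREAM", "DETECTED_OBJECT",
            new FractionalSelectivity(0.05)); // only a fraction of frames update the UI
    application.addTupleMapping("object_tracker", "OBJECT_LOCATION", "PTZ_PARAMS",
            new FractionalSelectivity(1.0));

    // One way to define the measured control loop
    final AppLoop controlLoop = new AppLoop(new ArrayList<String>() {{
        add("motion_detector"); add("object_detector"); add("object_tracker"); add("PTZ_CONTROL");
    }});
    application.setLoops(new ArrayList<AppLoop>() {{ add(controlLoop); }});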
AppModule Placement Strategies
• Cloud-only
  – Traditional cloud-based implementation
  – All application modules run in cloud DCs
  – Sensors transmit data to the cloud; actuators are informed if action is needed
• Edge-ward
  – Deployment of application modules close to the edge of the network
  – Starts from the lowest fog devices and moves towards the cloud
  – Places modules on devices between the network edge and the cloud
• Metrics: latency, network usage, energy consumption
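In iFogSim the two strategies are selected by passing a different ModulePlacement implementation to the Controller. A minimal sketch, assuming the fogDevices, sensors, actuators lists and the CLOUD flag from DCNSFog.java, and its device naming (cloud, area gateways d-<area>, cameras m-<area>-<cam>):

    ModuleMapping moduleMapping = ModuleMapping.createModuleMapping();
    moduleMapping.addModuleToDevice("user_interface", "cloud");    // UI always stays in the cloud
    for (FogDevice device : fogDevices)
        if (device.getName().startsWith("m-"))                     // smart cameras
            moduleMapping.addModuleToDevice("motion_detector", device.getName());
    if (CLOUD) {
        // Cloud-only: pin the heavy modules to the cloud DC
        moduleMapping.addModuleToDevice("object_detector", "cloud");
        moduleMapping.addModuleToDevice("object_tracker", "cloud");
    } else {
        // Edge-ward: request the tracker on each area gateway; the placement
        // algorithm pushes remaining modules as close to the edge as capacity allows
        for (FogDevice device : fogDevices)
            if (device.getName().startsWith("d-"))
                moduleMapping.addModuleToDevice("object_tracker", device.getName());
    }

    Controller controller = new Controller("master-controller", fogDevices, sensors, actuators);
    controller.submitApplication(application, 0,
            CLOUD ? new ModulePlacementMapping(fogDevices, application, moduleMapping)
                  : new ModulePlacementEdgewards(fogDevices, sensors, actuators, application, moduleMapping));
    CloudSim.startSimulation();   // CloudSim.init(...) must have been called earlier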
Simulation Variables

    [...]
    public class DCNSFog {
        static List<FogDevice> fogDevices = new ArrayList<FogDevice>();
        static List<Sensor> sensors = new ArrayList<Sensor>();
        static List<Actuator> actuators = new ArrayList<Actuator>();

        static int numOfAreas = 1;
        static int numOfCamerasPerArea = 4;

        private static boolean CLOUD = false;
        [...]

* See DCNSFog.java in org.fog.test.perfeval
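These variables drive the physical topology: one cloud DC, one ISP proxy, numOfAreas area gateways, and numOfCamerasPerArea smart cameras per gateway. A simplified sketch of how the hierarchy could be wired, using a hypothetical createFogDevice(name, mips, ram, level) helper as a stand-in for the longer factory method in DCNSFog.java; all capacities and latencies are illustrative:

    // Hierarchy: cloud <- ISP proxy <- area gateways (d-i) <- smart cameras (m-i-j)
    // createFogDevice(...) here is a hypothetical simplified helper (see DCNSFog.java)
    FogDevice cloud = createFogDevice("cloud", 44800, 40000, 0);
    cloud.setParentId(-1);
    fogDevices.add(cloud);

    FogDevice proxy = createFogDevice("proxy-server", 2800, 4000, 1);  // ISP gateway
    proxy.setParentId(cloud.getId());
    proxy.setUplinkLatency(100);   // ISP gateway to cloud DC (illustrative)
    fogDevices.add(proxy);

    for (int i = 0; i < numOfAreas; i++) {
        FogDevice areaGw = createFogDevice("d-" + i, 2800, 4000, 2);   // area gateway
        areaGw.setParentId(proxy.getId());
        areaGw.setUplinkLatency(2);
        fogDevices.add(areaGw);
        for (int j = 0; j < numOfCamerasPerArea; j++) {
            FogDevice cam = createFogDevice("m-" + i + "-" + j, 500, 1000, 3); // smart camera
            cam.setParentId(areaGw.getId());
            cam.setUplinkLatency(2);
            fogDevices.add(cam);
        }
    }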
Evaluation
• Comparing the two placement strategies: latency, network usage, energy consumption
• Each image sensor, or "camera", is embedded in a smart camera
• Smart cameras gain access to the Internet via area gateways connected to an ISP gateway
• Constant number of smart cameras per area gateway (4), varying the number of area gateways
  – Config 1: 1 area gateway
  – Config 2: 2 area gateways
  – Config 3: 4 area gateways
  – Config 4: 8 area gateways
  – Config 5: 16 area gateways
Gupta et al., 2017
* Simulation period: 1000 seconds
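In terms of the DCNSFog.java variables shown earlier, the five configurations keep numOfCamerasPerArea fixed and vary only numOfAreas. A sketch; the original code simply sets these constants by hand for each run:

    // Config 1..5: 1, 2, 4, 8, 16 area gateways, each with 4 smart cameras
    int[] areasPerConfig = {1, 2, 4, 8, 16};
    int config = 3;                               // e.g. Config 3
    numOfAreas = areasPerConfig[config - 1];      // 4 area gateways
    numOfCamerasPerArea = 4;                      // constant across configurations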
Average Latency
• Cloud-only
  – Application modules are placed in the cloud DC
  – Processing latency increases with scale
• Edge-ward
  – Maintains low latency
  – Modules are placed on fog nodes near the edge
* Figure: average latency of the control loop
Network Usage
• Cloud-only
  – Network load increases with the number of devices connected to the application
• Edge-ward
  – Object detector and object tracker are placed on fog devices
  – Communication uses low-latency links
  – Reduces the volume of data sent to the cloud
Energy Consumption
• Energy consumed by different classes of devices
• Smart cameras have high energy consumption due to the motion detection module
• Using fog devices reduces energy consumption in the cloud DC
References
[1] H. Gupta, A. Vahid Dastjerdi, S. K. Ghosh, and R. Buyya, "iFogSim: A toolkit for modeling and simulation of resource management techniques in the Internet of Things, edge and fog computing environments," Software: Practice and Experience, vol. 47, no. 9, pp. 1275-1296, 2017.