Cybersecurity: It's All About The Assumptions
Mahalingam Ramkumar
Dept. of Computer Science and Engineering, Mississippi State University

Outline
- Protecting digital assets
- Active vs. passive approaches
- Why is passive better?
- Why are things the way they are?
- What needs to be done?

Protecting Assets
- Active
- Passive

Protecting Digital Assets (Cybersecurity)
- Active
- Passive

Active vs. Passive
- Active approaches: elimination/removal or isolation of
  - undesired functionality
  - undesired traffic
- Passive approaches: check that "everything is OK"
  - Tools: cryptography, system models
- Active: based on an attacker model
- Passive: based on a system model

Security in a Nutshell
- Assumptions → Strategy → Desired assurances
- The desired assurances do not depend on the approach
- Security protocol: a strategy for converting reliable assumptions into the desired assurances
- Assumptions: the guard is trustworthy; the team that developed the IDS is trustworthy; the hash function is preimage resistant; etc.
- Strategy: Is the strategy correct? Is it executed with integrity? This, too, boils down to assumptions.

Good Assumptions
- Good assumptions should be universally verifiable
- "The enemy knows the system" (Shannon)
- The system is completely open (Kerckhoffs's principle)
- Complexity is the enemy of security
- The assumptions behind active approaches cannot meet such requirements
- Passive approaches can be based on good assumptions

Desired Assurances and Efficacy
- Attackers have unlimited freedom: what, specifically, are the goals?
- Even if the integrity of their execution is assured, all that active approaches can eliminate is some attacks
- Passive approaches see any system as a set of data items with unambiguous rules for reading/modifying those data items
- If the agency that checks enforcement of the rules is trustworthy, the desired assurances can be realized
- Ideally both approaches should be used: active approaches to prevent as many attacks as possible, and passive approaches to detect the effects of attacks that slip past them

Toolbox for Passive Approaches
- Cryptography
- System models

Cryptography
- Utility: the ability to amplify trust
- Trust in the integrity/privacy of a small number of bits can be amplified into integrity/privacy assurances for any number of bits (a sketch follows after the Standards slide)
- Integrity: hash-function-based protocols
- Privacy: encryption/decryption-based protocols

Cryptographic Protocols
- Constructed by suitably chaining cryptographic primitives (block ciphers and hash functions)
- Secrecy protocols (for bulk and random access) are constructed by chaining block ciphers
- Integrity protocols (for bulk and random access) are constructed by chaining hash functions
- For most purposes the primitives are interchangeable

Cryptography Assumptions
- A small number of bits is stored inside a trusted boundary
- The cryptographic protocols (which have read/write access to the protected bits) are correct and are executed within the trusted boundary

Standards
- Trust in the integrity of cryptographic primitives stems from meticulous standardization efforts
- An equally meticulous standard is required for the environment in which cryptographic protocols are executed
- Without such a standard we simply cannot make use of the full power of cryptography
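The trust-amplification idea above can be illustrated with a minimal Python sketch (not part of the original slides). It assumes only the standard library's hashlib, uses SHA-256 as a stand-in for any standardized preimage-resistant hash, and the names (digest, root_of, verify, protected_bits) are hypothetical: a single digest held inside the trusted boundary covers any number of items held in untrusted storage.

```python
import hashlib

def digest(data: bytes) -> bytes:
    """SHA-256 stands in for any standardized preimage-resistant hash."""
    return hashlib.sha256(data).digest()

def root_of(items: list[bytes]) -> bytes:
    """Amplify trust: a single 256-bit value commits to every item.
    Hash each item, then hash the concatenation of the item hashes;
    only this root needs to live inside the trusted boundary."""
    return digest(b"".join(digest(x) for x in items))

def verify(items: list[bytes], trusted_root: bytes) -> bool:
    """Passive check: recompute the root and compare with the protected bits."""
    return root_of(items) == trusted_root

# Usage: any number of items in untrusted storage are covered by 32 trusted bytes.
records = [b"balance:1000", b"owner:alice", b"policy:read-only"]
protected_bits = root_of(records)           # kept inside the trusted boundary
assert verify(records, protected_bits)      # "everything is OK"
records[0] = b"balance:999999"              # tampering in untrusted storage
assert not verify(records, protected_bits)  # detected by the passive check
```

A production protocol would typically use a tree of hashes rather than one flat concatenation, so that individual items can be verified and updated without rehashing everything; the flat version is kept here only to show the trust-amplification principle.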
System Models
- Example: the Clark-Wilson system integrity model
- Data items whose integrity needs to be assured: constrained data items (CDIs)
- Only well-regulated transformation procedures (TPs) may modify CDIs
- TPs are executed in response to external triggers: unconstrained data items (UDIs)
- The evolution of the system can be seen as a sequence of snapshots of the CDIs

Assumptions for Model-Based Approaches
- Correctness of the model
- Integrity of the execution environment for TPs
- Integrity of the environment where snapshots and the code representing TPs are stored

Joint Standard Execution Environment
- One environment for both cryptographic protocols and TPs
- Snapshots of CDIs and TP code can be protected using cryptographic protocols
- All we need is for the standard environment to also execute TPs
- Even for complex systems, TPs can be simple
- Complex TPs can also be broken down into multiple simpler TPs

Trusted Tasks (a toy sketch of this cycle appears after the Summary slide)
- Depending on the trigger, fetch the appropriate TP and verify it against the protected bits
- Each TP modifies one or a small number of CDIs: fetch them and verify them against the protected bits
- If any CDI is private, decrypt it using the protected bits
- Execute the TP and re-encrypt the private CDIs
- Modify the CDIs, and hence the protected bits
- The standard environment requires only simple functionality

Beneficial Side Effects
- Passive approaches reduce the motivation for attacks
- If passive methods are also in use, the assumptions behind active approaches need not be universally verifiable; it is enough that the deployer is convinced of their utility

The Bias Towards Active Approaches
- Active approaches pose harder problems, and so are more appealing to researchers
- The tools used in active approaches are valuable to students (sought-after skills in industry)

Promoting Passive Approaches
- A rigorous standard for the execution environment is a prerequisite for reliable passive approaches
- Practitioners of active and passive approaches need different skill sets:
  - attacker model (active) vs. system model (passive)
  - practical deployment issues vs. design considerations

Summary
- Passive approaches are backed by better assumptions
- Passive approaches can substantially strengthen active approaches
- Currently, passive approaches are virtually nonexistent in research, teaching, and practice
- They require a completely different skill set than active approaches
- A standard operating environment would go a long way toward making full use of passive approaches
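As a companion to the Trusted Tasks slide above, here is a minimal Python sketch (not from the original talk) of the fetch/verify/execute/update cycle. It covers integrity only; the decryption and re-encryption of private CDIs is omitted. SHA-256 digests play the role of the "protected bits", and every class, function, TP, and CDI name is hypothetical.

```python
import hashlib

def h(blob: bytes) -> bytes:
    """SHA-256 as a stand-in for a standardized hash primitive."""
    return hashlib.sha256(blob).digest()

class TrustedEnvironment:
    """Sketch of the simple functionality needed inside the trusted boundary.
    Only per-item digests (the 'protected bits') live inside the boundary;
    TP code and CDI values sit in untrusted storage, verified on every fetch."""

    def __init__(self, tps: dict, cdis: dict):
        self.untrusted_tps = tps                             # name -> TP source (bytes)
        self.untrusted_cdis = cdis                           # name -> CDI value (bytes)
        self.tp_hash = {k: h(v) for k, v in tps.items()}     # protected bits
        self.cdi_hash = {k: h(v) for k, v in cdis.items()}   # protected bits

    def _fetch_verified(self, store: dict, hashes: dict, name: str) -> bytes:
        blob = store[name]
        assert h(blob) == hashes[name], f"integrity check failed for {name}"
        return blob

    def run_trusted_task(self, tp_name: str, cdi_name: str, udi: bytes) -> bytes:
        """Fetch-verify-execute-update cycle for one trigger (UDI)."""
        code = self._fetch_verified(self.untrusted_tps, self.tp_hash, tp_name)
        cdi = self._fetch_verified(self.untrusted_cdis, self.cdi_hash, cdi_name)
        scope = {}
        exec(code.decode(), scope)              # the TP defines transform(cdi, udi)
        new_cdi = scope["transform"](cdi, udi)
        self.untrusted_cdis[cdi_name] = new_cdi               # write back to storage
        self.cdi_hash[cdi_name] = h(new_cdi)                  # update protected bits
        return new_cdi

# Usage: a one-line TP that appends a deposit record to an account CDI.
tps = {"deposit": b"def transform(cdi, udi):\n    return cdi + b'|' + udi\n"}
cdis = {"account": b"balance:100"}
env = TrustedEnvironment(tps, cdis)
print(env.run_trusted_task("deposit", "account", b"+50"))  # b'balance:100|+50'
```

The point of the sketch is that the environment itself needs only hashing, a lookup, and the ability to run a small verified TP; everything else, including the CDIs and the TP code, can live outside the trusted boundary.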