
Efficient Detection of Split Personalities in Malware





  1. Efficient Detection of Split Personalities in Malware Davide Balzarotti, Marco Cova, Christoph Karlberger, Christopher Kruegel, Engin Kirda and Giovanni Vigna NDSS 2010

  2. OUTLINE • Introduction and Related Work • Our Approach • Implementation • Evaluation • Conclusion

  3. Introduction • Malware detection • Static • Dynamic • Sandboxes (Anubis, CWSandbox, Joebox, Norman Sandbox) • Counterattacks • Attacks on virtual machine emulators • CPU semantics and timing attacks (a classic example is sketched below) • Environment attacks • Checking processes, drivers, or registry values
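
As an illustration of the CPU-semantics attacks mentioned above, here is the classic "Red Pill" check, one of the VMware checks later used in the evaluation (slide 19). This is the well-known 32-bit x86 sketch, not code from the paper; the 0xd0 threshold is the original heuristic and is unreliable on modern hardware-assisted hypervisors.

    /* redpill.c - classic "Red Pill" CPU-semantics check (32-bit x86 only). */
    #include <stdio.h>

    int main(void) {
        unsigned char idtr[6];   /* SIDT stores a 2-byte limit + 4-byte base */
        __asm__ __volatile__ ("sidt %0" : "=m" (idtr));
        /* SIDT is unprivileged: on bare-metal 32-bit Windows the IDT base
         * is low, while software VMMs relocate it, so a high top byte of
         * the base address suggests a virtual machine. */
        if (idtr[5] > 0xd0)
            printf("IDT base high (0x%02x): VM suspected\n", idtr[5]);
        else
            printf("IDT base low: likely native hardware\n");
        return 0;
    }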

  4. Solution 1: Transparent malware analysis • Cobra • Executes malware in code blocks • Replaces unsafe instructions with safe versions • Ether • Hardware virtualization • Harder for malicious code to detect, but slow

  5. Solution 2: Detect differing behavior • "Emulating Emulation-Resistant Malware", 2009 • Reference system vs. emulated environment • Compares execution paths • Uses Ether to produce the reference trace • But executing the same program twice can lead to different execution runs

  6. OUTLINE • Introduction and Related Work • Our Approach • Implementation • Evaluation • Conclusion

  7. Our approach • Recording and replaying • Reference system vs. emulated environment • Compare system call traces: types and arguments (a sketch of such a trace record follows) • If the behavior differs, rerun the sample in a transparent framework (Ether) • Detects malware reliably and efficiently
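
A minimal sketch of what a logged call and a first-cut trace comparison could look like; the struct fields and names are our assumptions, not the paper's actual data layout.

    #include <stddef.h>

    /* Hypothetical per-call record: the approach compares system call
     * types and arguments, so the call number plus a digest of the
     * input arguments is enough for a first-cut comparison. */
    typedef struct {
        unsigned long syscall_no;   /* SSDT index of the call        */
        unsigned long arg_hash;     /* digest of the input arguments */
        long          status;       /* NTSTATUS the caller observed  */
    } syscall_record_t;

    /* Naive comparison: report a deviation as soon as type or arguments
     * differ. Slide 12 refines this with buffers that tolerate benign
     * reordering between the two runs. */
    int traces_match(const syscall_record_t *ref, size_t ref_n,
                     const syscall_record_t *ana, size_t ana_n)
    {
        size_t n = ref_n < ana_n ? ref_n : ana_n;
        for (size_t i = 0; i < n; i++)
            if (ref[i].syscall_no != ana[i].syscall_no ||
                ref[i].arg_hash   != ana[i].arg_hash)
                return 0;           /* split personality suspected */
        return ref_n == ana_n;      /* a missing tail is a deviation too */
    }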

  8. Reliability • Two systems are execution-equivalent if every program that • starts from the same initial state and • receives the same inputs on both systems => exhibits the same runtime behavior => the same sequence of system calls • Assumes no race conditions
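
In symbols (our notation, not the paper's): write \( \sigma_X(p, s_0, I) \) for the sequence of system calls (types and arguments) that program \( p \) issues on system \( X \) when started in state \( s_0 \) with input \( I \). Then

    \[ A \equiv B \iff \forall p,\; s_0,\; I:\;\; \sigma_A(p, s_0, I) = \sigma_B(p, s_0, I) \]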

  9. Reliability (cont.) • If the reference system and the analysis system are execution-equivalent, any difference in the observed behavior => split personality • Moreover, the discrepancy must be the result of a CPU-semantics or timing attack

  10. Making Systems Execution-Equivalent • Same OS environment • Same address-space layout of the process at load time • Same inputs to the program • Run the program on the reference system in log mode • Run the program on the analysis system in replay mode • System call matching

  11. Replay Problems • A number of system calls are not safe to replay • e.g., allocating memory, spawning threads • Replay only those system calls that read data from the environment (see the dispatch sketch below) • All other system calls are passed directly to the underlying OS • Delays cause additional system calls • e.g., WaitForSingleObject()
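
A sketch of the replay-mode dispatch this slide describes, with hypothetical names, stubbed-out machinery, and placeholder service numbers; the real driver's tables and handlers are more involved.

    #include <stddef.h>

    typedef long NTSTATUS;

    /* Hypothetical subset of environment-reading services (the SSDT
     * indices are placeholders); only these are answered from the log. */
    static const unsigned long REPLAY_SET[] = { 0x0f, 0x33, 0xb1 };

    static int reads_environment(unsigned long no)
    {
        for (size_t i = 0; i < sizeof REPLAY_SET / sizeof *REPLAY_SET; i++)
            if (REPLAY_SET[i] == no)
                return 1;
        return 0;
    }

    /* Stubs standing in for the real driver machinery. */
    static NTSTATUS replay_from_log(unsigned long no, void **args)
    { (void)no; (void)args; return 0; }   /* would copy logged out-params */
    static NTSTATUS call_original(unsigned long no, void **args)
    { (void)no; (void)args; return 0; }   /* would forward to the real OS */

    /* Calls that only READ from the environment are answered from the
     * log; everything with side effects (memory allocation, thread
     * creation, ...) must truly execute, or the process state would
     * diverge from a genuine run. */
    NTSTATUS replay_dispatch(unsigned long no, void **args)
    {
        if (reads_environment(no))
            return replay_from_log(no, args);
        return call_original(no, args);
    }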

  12. System Call Matching • [Diagram: a reference trace (calls 3 4 1 2) is matched against an analysis trace (1 3 2 5); reference calls that are jumped over land in a Buf_skipped buffer, analysis calls without a counterpart in Buf_extra] • A sketch of this matching follows
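
A sketch of such tolerant matching, following the diagram's idea; the function names, the fixed buffer capacities, and the look-ahead window size are our assumptions.

    #include <stddef.h>

    #define WINDOW 4   /* assumed look-ahead distance in the reference trace */

    typedef struct { unsigned long no, arg_hash; } call_t;

    static int same(const call_t *a, const call_t *b) {
        return a->no == b->no && a->arg_hash == b->arg_hash;
    }

    /* Match one analysis-side call against the reference log.
     * ref/ref_n: reference trace; *pos advances past matched entries.
     * skipped/extra: the two buffers from the diagram (capacity checks
     * omitted). Returns 1 on a match, 0 if the call ends up in extra. */
    int match_call(const call_t *c,
                   const call_t *ref, size_t ref_n, size_t *pos,
                   call_t *skipped, size_t *n_skip,
                   call_t *extra,   size_t *n_extra)
    {
        /* 1. Was this reference call skipped over earlier? */
        for (size_t i = 0; i < *n_skip; i++)
            if (same(c, &skipped[i])) {
                skipped[i] = skipped[--*n_skip];   /* consume it */
                return 1;
            }
        /* 2. Look ahead a bounded distance in the reference trace,
         *    parking jumped-over reference calls in Buf_skipped. */
        for (size_t k = 0; k < WINDOW && *pos + k < ref_n; k++)
            if (same(c, &ref[*pos + k])) {
                for (size_t j = 0; j < k; j++)
                    skipped[(*n_skip)++] = ref[*pos + j];
                *pos += k + 1;
                return 1;
            }
        /* 3. No counterpart: extra work done only by the analysis run. */
        extra[(*n_extra)++] = *c;
        return 0;
    }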

  13. OUTLINE • Introduction and Related Work • Our Approach • Implementation • Evaluation • Conclusion

  14. Implementation • A kernel driver • Traps all system calls • Hooks the System Service Descriptor Table (SSDT); a sketch follows below • For each system call, two handlers: log and replay • A user-space application • Starts and controls the driver • Starts the process to be analyzed • Stores the data generated during the logging phase
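
A minimal sketch of SSDT hook installation on 32-bit Windows XP, using the widely documented table layout; clearing the CR0 write-protect bit (or remapping the table through an MDL) before writing, and all error handling, are omitted.

    #include <ntddk.h>

    /* Service descriptor table layout on 32-bit XP, as widely documented;
     * KeServiceDescriptorTable is exported by the kernel. */
    typedef struct _SSDT {
        PULONG ServiceTableBase;        /* handler address per syscall no. */
        PULONG ServiceCounterTableBase;
        ULONG  NumberOfServices;
        PUCHAR ParamTableBase;
    } SSDT, *PSSDT;

    __declspec(dllimport) SSDT KeServiceDescriptorTable;

    /* Swap in our handler for one service and return the original, which
     * the log handler calls through (and the replay handler may bypass). */
    PVOID hook_service(ULONG index, PVOID new_handler)
    {
        PVOID old = (PVOID)KeServiceDescriptorTable.ServiceTableBase[index];
        KeServiceDescriptorTable.ServiceTableBase[index] = (ULONG)new_handler;
        return old;
    }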

  15. Practical aspects • Handle consistency • Live handles vs. replayed handles • Check incoming handles against a list of all replayed handles (sketch below) • Networking • NtDeviceIoControlFile() • Device-dependent parameters
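
A sketch of the replayed-handle list, assuming a simple fixed-size (logged -> live) table; the names and capacity are hypothetical.

    #include <stddef.h>

    typedef void *HANDLE;
    #define MAX_REPLAYED 256               /* assumed table capacity */

    /* A handle value recorded on the reference system means nothing on
     * the analysis system, so handles returned by replayed calls are
     * remembered here; later calls that pass a handle in are checked
     * against this list to decide whether it is live or replayed. */
    typedef struct { HANDLE logged, live; } handle_pair_t;

    static handle_pair_t g_replayed[MAX_REPLAYED];
    static size_t        g_count;

    void remember_replayed(HANDLE logged, HANDLE live)
    {
        if (g_count < MAX_REPLAYED) {
            g_replayed[g_count].logged = logged;
            g_replayed[g_count].live   = live;
            g_count++;
        }
    }

    /* NULL means the handle never came from a replayed call, i.e. it is
     * live and the call should go to the real OS unchanged. */
    HANDLE lookup_replayed(HANDLE logged)
    {
        for (size_t i = 0; i < g_count; i++)
            if (g_replayed[i].logged == logged)
                return g_replayed[i].live;
        return NULL;
    }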

  16. Practical aspects (cont.) • Deferred results • STATUS_PENDING • NtWaitForSingleObject() • Thread management • NtCreateThread() • Each thread gets its own log

  17. Limitations • Memory-mapped files • DLLs are loaded as memory-mapped files • The corresponding system calls are removed from the trace • Multiple processes • Random numbers • KsecDD • Inter-process communication and asynchronous calls • Checks are postponed

  18. OUTLINE • Introduction and Related Work • Our Approach • Implementation • Evaluation • Conclusion

  19. Evaluation • Microsoft Windows XP Service Pack 3 • Reference system: VMware virtual machine • Analysis system: Anubis (Qemu) • 1. Log and replay six benign programs (success) • 2. SDBot (fail) • Spawns a new process (NtCreateProcess), which is not supported • Six SDBot versions with VMware-detection checks (success) • Red Pill, Scoopy, VMDetect, and SourPill

  20. Evaluation (cont.) • 3. Real malware with no VM checks

  21. Evaluation (cont.) • 4. Real malware with VM checks

  22. Performance • Overhead depends on the type of operation • 1% overhead on average • Compressing a 1 KB random file (CMD: 7za.exe a test.zip 1KB_rand_file) • Anubis: 4.267 s • Ether: 77.325 s • Our VMware reference system: 1.640 s

  23. OUTLINE • Introduction and Related Work • Our Approach • Implementation • Evaluation • Conclusion

  24. Conclusion • A prototype that records system calls on a reference system and replays them on the analysis system • A fully transparent analysis system (e.g., Ether) is still needed to further examine samples whose behavior differs
