diff --git a/ContentViewerModules/BinEd_Binary_Viewer/README.md b/ContentViewerModules/BinEd_Binary_Viewer/README.md
new file mode 100644
index 0000000..e4043cc
--- /dev/null
+++ b/ContentViewerModules/BinEd_Binary_Viewer/README.md
@@ -0,0 +1,6 @@
+- __Description:__ Alternative binary/hexadecimal data content viewer and file viewer/editor plugin.
+- __Author:__ ExBin Project
+- __Minimum Autopsy version:__ 4.20.0
+- __Module Location__: https://bined.exbin.org/autopsy-plugin/
+- __Source Code:__ https://github.com/exbin/bined-autopsy-plugin
+- __License:__ Apache V2.0 License
diff --git a/ContentViewerModules/Event_Log_viewer/Event_Log_Viewer.nbm b/ContentViewerModules/Event_Log_viewer/Event_Log_Viewer.nbm
new file mode 100644
index 0000000..70e4ca0
Binary files /dev/null and b/ContentViewerModules/Event_Log_viewer/Event_Log_Viewer.nbm differ
diff --git a/ContentViewerModules/Event_Log_viewer/README.md b/ContentViewerModules/Event_Log_viewer/README.md
new file mode 100644
index 0000000..21059d9
--- /dev/null
+++ b/ContentViewerModules/Event_Log_viewer/README.md
@@ -0,0 +1,5 @@
+- __Description:__ A module package containing a Data Content Viewer. Allows the user to view individual Event Log (EVTX) files from a Windows system.
+- __Author:__ Mark McKinnon
+- __Minimum Autopsy version:__ 4.18.0
+- __Source Code:__ https://github.com/markmckinnon/Autopsy-NBM-Plugins/tree/main/AutopsyEventLogViewer
+- __License:__ Apache V2.0 License
diff --git a/ContentViewerModules/Kafka_Viewer/KafkaLogForensic.nbm b/ContentViewerModules/Kafka_Viewer/KafkaLogForensic.nbm
new file mode 100644
index 0000000..3f94323
Binary files /dev/null and b/ContentViewerModules/Kafka_Viewer/KafkaLogForensic.nbm differ
diff --git a/ContentViewerModules/Kafka_Viewer/README.md b/ContentViewerModules/Kafka_Viewer/README.md
new file mode 100644
index 0000000..030ef86
--- /dev/null
+++ b/ContentViewerModules/Kafka_Viewer/README.md
@@ -0,0 +1,5 @@
+- __Description:__ Kafka Log Forensic is a Data Content Viewer for the big-data streaming platform Apache Kafka. It allows the user to view records stored cluster-side in Apache Kafka log files.
+- __Author:__ Tom Wayne
+- __Minimum Autopsy version:__ 4.18.0
+- __Source Code:__ https://github.com/tomwayne1984/autopsy_kafka_forensics/tree/main/source
+- __License:__ GNU GPL v3
diff --git a/ContentViewerModules/LNK_File_Viewer/README.md b/ContentViewerModules/LNK_File_Viewer/README.md
new file mode 100644
index 0000000..e18fc64
--- /dev/null
+++ b/ContentViewerModules/LNK_File_Viewer/README.md
@@ -0,0 +1,5 @@
+- __Description:__ A module package containing a Data Content Viewer. Allows the user to view individual Link (*.lnk) files from a Windows system.
+- __Author:__ Mark McKinnon
+- __Minimum Autopsy version:__ 4.16.0
+- __Source Code:__ https://github.com/markmckinnon/Autopsy-NBM-Plugins/tree/main/LNK_File_Viewer
+- __License:__ Apache V2.0 License
diff --git a/ContentViewerModules/LNK_File_Viewer/lnk_file_viewer.nbm b/ContentViewerModules/LNK_File_Viewer/lnk_file_viewer.nbm
new file mode 100644
index 0000000..950c63e
Binary files /dev/null and b/ContentViewerModules/LNK_File_Viewer/lnk_file_viewer.nbm differ
diff --git a/ContentViewerModules/PolySwarm/README.md b/ContentViewerModules/PolySwarm/README.md
new file mode 100644
index 0000000..d53ab27
--- /dev/null
+++ b/ContentViewerModules/PolySwarm/README.md
@@ -0,0 +1,6 @@
+- __Description:__ Perform hash lookups and file scans on PolySwarm via the right-click menu on files.
+- __Author:__ PolySwarm Developers
+- __Minimum Autopsy version:__ 4.8.0
+- __Current Source Code and Releases:__ https://github.com/polyswarm/autopsy-module/releases
+- __Original Source Code:__ https://github.com/polyswarm/autopsy-module
+- __License:__ MIT
diff --git a/ContentViewerModules/Video_Triage/README.md b/ContentViewerModules/Video_Triage/README.md
index 86100e7..bc2a1b3 100644
--- a/ContentViewerModules/Video_Triage/README.md
+++ b/ContentViewerModules/Video_Triage/README.md
@@ -1,5 +1,5 @@
 - __Description:__ Analyzes video files and displays a series of images so that you can get a basic idea of what the video contains without viewing the entire thing.
 - __Author:__ Basis Technology
 - __Minimum Autopsy version:__ 3.0.7
-- __Module Location__: http://www.basistech.com/digital-forensics/autopsy/video-triage/
+- __Module Location__: https://www.autopsy.com/add-on-modules/video-triage/
 - __License:__ Closed source
diff --git a/ContentViewerModules/Windows_Prefetch_Viewer/Prefetch_File_Viewer.nbm b/ContentViewerModules/Windows_Prefetch_Viewer/Prefetch_File_Viewer.nbm
new file mode 100644
index 0000000..451b090
Binary files /dev/null and b/ContentViewerModules/Windows_Prefetch_Viewer/Prefetch_File_Viewer.nbm differ
diff --git a/ContentViewerModules/Windows_Prefetch_Viewer/README.md b/ContentViewerModules/Windows_Prefetch_Viewer/README.md
new file mode 100644
index 0000000..844d802
--- /dev/null
+++ b/ContentViewerModules/Windows_Prefetch_Viewer/README.md
@@ -0,0 +1,5 @@
+- __Description:__ A module package containing a Data Content Viewer. Allows the user to view individual Prefetch (*.pf) files from a Windows system.
+- __Author:__ Mark McKinnon
+- __Minimum Autopsy version:__ 4.18.0
+- __Source Code:__ https://github.com/markmckinnon/Autopsy-NBM-Plugins/tree/main/Prefetch_File_Viewer
+- __License:__ Apache V2.0 License
diff --git a/IngestModules/Antivirus_scanner/README.md b/IngestModules/Antivirus_scanner/README.md
new file mode 100644
index 0000000..eb0a7eb
--- /dev/null
+++ b/IngestModules/Antivirus_scanner/README.md
@@ -0,0 +1,7 @@
+- __Description:__ Module for malware scanning using the ClamAV antivirus engine.
+- __Author:__ Askar Dyussekeyev
+- __Minimum Autopsy version:__ 4.19.3
+- __Module Location__: https://github.com/dyussekeyev/ClamPsy/releases
+- __Website:__ https://github.com/dyussekeyev/ClamPsy/blob/main/README.md
+- __Source Code:__ https://github.com/dyussekeyev/ClamPsy
+- __License:__ MIT License
diff --git a/IngestModules/Bitcoin_Detection/README.md b/IngestModules/Bitcoin_Detection/README.md
new file mode 100644
index 0000000..23f301c
--- /dev/null
+++ b/IngestModules/Bitcoin_Detection/README.md
@@ -0,0 +1,6 @@
+- __Description:__ Detects traces of Electrum, the Ledger Live app, bitaddress.org, and Ledger Nano X connections (USB; Bluetooth) on Windows 10 systems.
+- __Author:__ dgo-berlin (https://github.com/dgo-berlin)
+- __Minimum Autopsy version:__ 4.19.2
+- __Module Location__: https://github.com/dgo-berlin/bitcoin_usage_detection_autopsy_plugin/blob/master/BitcoinDetection/build/org-bitcoin-detection.nbm
+- __Website:__ https://github.com/dgo-berlin/bitcoin_usage_detection_autopsy_plugin/
+- __Source Code:__ https://github.com/dgo-berlin/bitcoin_usage_detection_autopsy_plugin/tree/master/BitcoinDetection/src
diff --git a/IngestModules/CopyMove/README.md b/IngestModules/CopyMove/README.md
index 8480563..e7c9efc 100644
--- a/IngestModules/CopyMove/README.md
+++ b/IngestModules/CopyMove/README.md
@@ -1,3 +1,5 @@
+- __Known Issues:__ This module does not work with the latest versions of Autopsy (April 2020 - https://sleuthkit.discourse.group/t/copy-move-module/1026)
+
 - __Description:__ A module package containing a File Ingest Module and its corresponding Data Content Viewer. Allows the user to identify Copy-Move forgeries within images in the datasource. Please read the readme before using the package.
 - __Author:__ Tobias Maushammer
 - __Minimum Autopsy version:__ 4.1.0
diff --git a/IngestModules/MacOSX_Account_Parser/.gitignore b/IngestModules/MacOSX_Account_Parser/.gitignore
new file mode 100644
index 0000000..53a888e
--- /dev/null
+++ b/IngestModules/MacOSX_Account_Parser/.gitignore
@@ -0,0 +1,181 @@
+# Created by .ignore support plugin (hsz.mobi)
+### Python template
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+env/
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+*.egg-info/
+.installed.cfg
+*.egg
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*,cover
+.hypothesis/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+target/
+
+# IPython Notebook
+.ipynb_checkpoints
+
+# pyenv
+.python-version
+
+# celery beat schedule file
+celerybeat-schedule
+
+# dotenv
+.env
+
+# virtualenv
+venv/
+ENV/
+
+# Spyder project settings
+.spyderproject
+
+# Rope project settings
+.ropeproject
+### VirtualEnv template
+# Virtualenv
+# http://iamzed.com/2009/05/07/a-primer-on-virtualenv/
+.Python
+[Bb]in
+[Ii]nclude
+[Ll]ib
+[Ll]ib64
+[Ll]ocal
+[Ss]cripts
+pyvenv.cfg
+.venv
+pip-selfcheck.json
+### JetBrains template
+# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio and Webstorm
+# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839
+
+# User-specific stuff:
+.idea/workspace.xml
+.idea/tasks.xml
+.idea/dictionaries
+.idea/vcs.xml
+.idea/jsLibraryMappings.xml
+
+# Sensitive or high-churn files:
+.idea/dataSources.ids
+.idea/dataSources.xml
+.idea/dataSources.local.xml
+.idea/sqlDataSources.xml
+.idea/dynamic.xml
+.idea/uiDesigner.xml
+
+# Gradle:
+.idea/gradle.xml
+.idea/libraries
+
+# Mongo Explorer plugin:
+.idea/mongoSettings.xml
+
+.idea/
+
+## File-based project format:
+*.iws
+
+## Plugin-specific files:
+
+# IntelliJ
+/out/
+
+# mpeltonen/sbt-idea plugin
+.idea_modules/
+
+# JIRA plugin
+atlassian-ide-plugin.xml
+
+# Crashlytics plugin (for Android Studio and IntelliJ)
+com_crashlytics_export_strings.xml
+crashlytics.properties
+crashlytics-build.properties
+fabric.properties
+
+# General
+.DS_Store
+.AppleDouble
+.LSOverride
+
+# Icon must end with two \r
+Icon
+
+
+# Thumbnails
+._*
+
+# Files that might appear in the root of a volume
+.DocumentRevisions-V100
+.fseventsd
+.Spotlight-V100
+.TemporaryItems
+.Trashes
+.VolumeIcon.icns
+.com.apple.timemachine.donotpresent
+
+# Directories potentially created on remote AFP share
+.AppleDB
+.AppleDesktop
+Network Trash Folder
+Temporary Items
+.apdisk
+
+*$py.class
\ No newline at end of file
diff --git a/IngestModules/MacOSX_Account_Parser/README.md b/IngestModules/MacOSX_Account_Parser/README.md
new file mode 100644
index 0000000..839e03f
--- /dev/null
+++ b/IngestModules/MacOSX_Account_Parser/README.md
@@ -0,0 +1,64 @@
+- __Description:__ Parse OSX 10.8+ account .plist files and extract any available attributes. If a hashed password is available,
+extract it and present it in a format that can be used with [Hashcat](https://hashcat.net/).
+- __Author:__ Luke Gaddie
+- __Minimum Autopsy version:__ 4.0.0
+- __License:__ [MIT](https://opensource.org/licenses/MIT), with the exception of dependencies:
+  - [biplist](https://pypi.org/project/biplist/) - BSD License (BSD)
+
+## Installation & Usage
+Copy MacOSX_Account_Parser into your Autopsy Python Plugins Folder.
+
+Run ingest modules against your data source, making sure to enable the "MacOSX Account Parser" module.
+
+Any extracted account information will be placed in one of two spots:
+
+- Extracted Content
+  - Operating System User Account
+  - Hashed Credentials
+
+## Hashcat Usage
+
+In the event that hashed credentials can be extracted from the user account, they'll be placed in "Extracted Content" ->
+"Hashed Credentials".
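+
+For reference, a minimal standalone sketch of the same derivation (illustrative only, not part of the module; it assumes Python 2 with biplist installed, and a user plist copied out of the image from /private/var/db/dslocal/nodes/Default/users/ to a hypothetical local file named lukeg.plist):
+
+```python
+# Illustrative sketch; the file name is a placeholder.
+from StringIO import StringIO
+from biplist import readPlist
+
+plist = readPlist('lukeg.plist')
+# ShadowHashData holds a nested binary plist as the first element of a list.
+shadow = readPlist(StringIO(plist['ShadowHashData'][0]))
+pbkdf2 = shadow['SALTED-SHA512-PBKDF2']
+# $ml$[iterations]$[salt hex]$[first 128 hex characters of entropy]
+print '$ml$%d$%s$%s' % (pbkdf2['iterations'],
+                        pbkdf2['salt'].encode('hex'),
+                        pbkdf2['entropy'].encode('hex')[:128])
+```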
+
+Assuming that you place the "Hashcat Entry" value found in an artifact in hashes.txt, a sample hashcat session might look like:
+
+```
+C:\hashcat> hashcat64.exe -m 7100 ./hashes.txt ./dictionary.txt
+hashcat (v5.1.0) starting...
+
+[...]
+
+Approaching final keyspace - workload adjusted.
+
+$ml$68027$fccff02010450ae731c883d638b2a3028bf6504937bab584c283a3a44e8f7ad8$e945d8df4ca67261ff45b07a71e5d695816c53532b42988ae1e91268e869c877ef0186a4b2bdaa75d4b316d03274f5b453ee1c5fef067638041fc696fd091400:TestPassword
+
+Session..........: hashcat
+Status...........: Cracked
+Hash.Type........: macOS v10.8+ (PBKDF2-SHA512)
+Hash.Target......: $ml$68027$fccff02010450ae731c883d638b2a3028bf650493...091400
+Time.Started.....: Mon Sep 28 18:01:20 2020 (1 sec)
+Time.Estimated...: Mon Sep 28 18:01:21 2020 (0 secs)
+Guess.Base.......: File (dictionary.txt)
+Guess.Queue......: 1/1 (100.00%)
+Speed.#2.........: 2 H/s (0.45ms) @ Accel:64 Loops:32 Thr:64 Vec:1
+Speed.#3.........: 0 H/s (0.00ms) @ Accel:64 Loops:32 Thr:64 Vec:1
+Speed.#*.........: 2 H/s
+Recovered........: 1/1 (100.00%) Digests, 1/1 (100.00%) Salts
+Progress.........: 2/2 (100.00%)
+Rejected.........: 0/2 (0.00%)
+Restore.Point....: 0/2 (0.00%)
+Restore.Sub.#2...: Salt:0 Amplifier:0-1 Iteration:68000-68026
+Restore.Sub.#3...: Salt:0 Amplifier:0-0 Iteration:0-32
+Candidates.#2....: TestPassword -> hashcat
+Candidates.#3....: [Copying]
+Hardware.Mon.#2..: Temp: 58c Fan: 41% Util: 87% Core:1936MHz Mem:4513MHz Bus:8
+Hardware.Mon.#3..: Temp: 53c Fan: 36% Util: 0% Core:1695MHz Mem:4513MHz Bus:8
+
+```
+
+## Misc. Information
+
+* Accounts are stored in /private/var/db/dslocal/nodes/Default/*.plist
+* Credentials are hashed as SALTED-SHA512-PBKDF2 (Hashcat -m 7100)
+* Hashes are formatted as $ml$[iterations]$[salt]$[first 128 hex characters of entropy]
\ No newline at end of file
diff --git a/IngestModules/MacOSX_Account_Parser/biplist/__init__.py b/IngestModules/MacOSX_Account_Parser/biplist/__init__.py
new file mode 100644
index 0000000..f9d5836
--- /dev/null
+++ b/IngestModules/MacOSX_Account_Parser/biplist/__init__.py
@@ -0,0 +1,977 @@
+"""biplist -- a library for reading and writing binary property list files.
+
+Binary Property List (plist) files provide a faster and smaller serialization
+format for property lists on OS X. This is a library for generating binary
+plists which can be read by OS X, iOS, or other clients.
+
+The API models the plistlib API, and will call through to plistlib when
+XML serialization or deserialization is required.
+
+To generate plists with UID values, wrap the values with the Uid object. The
+value must be an int.
+
+To generate plists with NSData/CFData values, wrap the values with the
+Data object. The value must be a string.
+
+Date values can only be datetime.datetime objects.
+
+The exceptions InvalidPlistException and NotBinaryPlistException may be
+thrown to indicate that the data cannot be serialized or deserialized as
+a binary plist.
+ +Plist generation example: + + from biplist import * + from datetime import datetime + plist = {'aKey':'aValue', + '0':1.322, + 'now':datetime.now(), + 'list':[1,2,3], + 'tuple':('a','b','c') + } + try: + writePlist(plist, "example.plist") + except (InvalidPlistException, NotBinaryPlistException), e: + print "Something bad happened:", e + +Plist parsing example: + + from biplist import * + try: + plist = readPlist("example.plist") + print plist + except (InvalidPlistException, NotBinaryPlistException), e: + print "Not a plist:", e +""" + +from collections import namedtuple +import datetime +import io +import math +import plistlib +from struct import pack, unpack, unpack_from +from struct import error as struct_error +import sys +import time + +try: + unicode + unicodeEmpty = r'' +except NameError: + unicode = str + unicodeEmpty = '' +try: + long +except NameError: + long = int +try: + {}.iteritems + iteritems = lambda x: x.iteritems() +except AttributeError: + iteritems = lambda x: x.items() + +__all__ = [ + 'Uid', 'Data', 'readPlist', 'writePlist', 'readPlistFromString', + 'writePlistToString', 'InvalidPlistException', 'NotBinaryPlistException' +] + +# Apple uses Jan 1, 2001 as a base for all plist date/times. +apple_reference_date = datetime.datetime.utcfromtimestamp(978307200) + +class Uid(object): + """Wrapper around integers for representing UID values. This + is used in keyed archiving.""" + integer = 0 + def __init__(self, integer): + self.integer = integer + + def __repr__(self): + return "Uid(%d)" % self.integer + + def __eq__(self, other): + if isinstance(self, Uid) and isinstance(other, Uid): + return self.integer == other.integer + return False + + def __cmp__(self, other): + return self.integer - other.integer + + def __lt__(self, other): + return self.integer < other.integer + + def __hash__(self): + return self.integer + + def __int__(self): + return int(self.integer) + +class Data(bytes): + """Wrapper around bytes to distinguish Data values.""" + +class InvalidPlistException(Exception): + """Raised when the plist is incorrectly formatted.""" + +class NotBinaryPlistException(Exception): + """Raised when a binary plist was expected but not encountered.""" + +def readPlist(pathOrFile): + """Raises NotBinaryPlistException, InvalidPlistException""" + didOpen = False + result = None + if isinstance(pathOrFile, (bytes, unicode)): + pathOrFile = open(pathOrFile, 'rb') + didOpen = True + try: + reader = PlistReader(pathOrFile) + result = reader.parse() + except NotBinaryPlistException as e: + try: + pathOrFile.seek(0) + result = None + if hasattr(plistlib, 'loads'): + contents = None + if isinstance(pathOrFile, (bytes, unicode)): + with open(pathOrFile, 'rb') as f: + contents = f.read() + else: + contents = pathOrFile.read() + result = plistlib.loads(contents) + else: + result = plistlib.readPlist(pathOrFile) + result = wrapDataObject(result, for_binary=True) + except Exception as e: + raise InvalidPlistException(e) + finally: + if didOpen: + pathOrFile.close() + return result + +def wrapDataObject(o, for_binary=False): + if isinstance(o, Data) and not for_binary: + v = sys.version_info + if not (v[0] >= 3 and v[1] >= 4): + o = plistlib.Data(o) + elif isinstance(o, (bytes, plistlib.Data)) and for_binary: + if hasattr(o, 'data'): + o = Data(o.data) + elif isinstance(o, tuple): + o = wrapDataObject(list(o), for_binary) + o = tuple(o) + elif isinstance(o, list): + for i in range(len(o)): + o[i] = wrapDataObject(o[i], for_binary) + elif isinstance(o, dict): + for k in o: + o[k] = 
wrapDataObject(o[k], for_binary) + return o + +def writePlist(rootObject, pathOrFile, binary=True): + if not binary: + rootObject = wrapDataObject(rootObject, binary) + if hasattr(plistlib, "dump"): + if isinstance(pathOrFile, (bytes, unicode)): + with open(pathOrFile, 'wb') as f: + return plistlib.dump(rootObject, f) + else: + return plistlib.dump(rootObject, pathOrFile) + else: + return plistlib.writePlist(rootObject, pathOrFile) + else: + didOpen = False + if isinstance(pathOrFile, (bytes, unicode)): + pathOrFile = open(pathOrFile, 'wb') + didOpen = True + writer = PlistWriter(pathOrFile) + result = writer.writeRoot(rootObject) + if didOpen: + pathOrFile.close() + return result + +def readPlistFromString(data): + return readPlist(io.BytesIO(data)) + +def writePlistToString(rootObject, binary=True): + if not binary: + rootObject = wrapDataObject(rootObject, binary) + if hasattr(plistlib, "dumps"): + return plistlib.dumps(rootObject) + elif hasattr(plistlib, "writePlistToBytes"): + return plistlib.writePlistToBytes(rootObject) + else: + return plistlib.writePlistToString(rootObject) + else: + ioObject = io.BytesIO() + writer = PlistWriter(ioObject) + writer.writeRoot(rootObject) + return ioObject.getvalue() + +def is_stream_binary_plist(stream): + stream.seek(0) + header = stream.read(7) + if header == b'bplist0': + return True + else: + return False + +PlistTrailer = namedtuple('PlistTrailer', 'offsetSize, objectRefSize, offsetCount, topLevelObjectNumber, offsetTableOffset') +PlistByteCounts = namedtuple('PlistByteCounts', 'nullBytes, boolBytes, intBytes, realBytes, dateBytes, dataBytes, stringBytes, uidBytes, arrayBytes, setBytes, dictBytes') + +class PlistReader(object): + file = None + contents = '' + offsets = None + trailer = None + currentOffset = 0 + # Used to detect recursive object references. + offsetsStack = [] + + def __init__(self, fileOrStream): + """Raises NotBinaryPlistException.""" + self.reset() + self.file = fileOrStream + + def parse(self): + return self.readRoot() + + def reset(self): + self.trailer = None + self.contents = '' + self.offsets = [] + self.currentOffset = 0 + self.offsetsStack = [] + + def readRoot(self): + result = None + self.reset() + # Get the header, make sure it's a valid file. 
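+        # Layout note: a binary plist is an 8-byte "bplist00" magic, the
+        # object table, the offset table, and a fixed 32-byte trailer. The
+        # "!xxxxxxBBQQQ" unpack below skips six leading trailer bytes, then
+        # reads the offset int size, object-ref int size, object count,
+        # top-level object number, and the offset table's file position.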
+ if not is_stream_binary_plist(self.file): + raise NotBinaryPlistException() + self.file.seek(0) + self.contents = self.file.read() + if len(self.contents) < 32: + raise InvalidPlistException("File is too short.") + trailerContents = self.contents[-32:] + try: + self.trailer = PlistTrailer._make(unpack("!xxxxxxBBQQQ", trailerContents)) + + if pow(2, self.trailer.offsetSize*8) < self.trailer.offsetTableOffset: + raise InvalidPlistException("Offset size insufficient to reference all objects.") + + if pow(2, self.trailer.objectRefSize*8) < self.trailer.offsetCount: + raise InvalidPlistException("Too many offsets to represent in size of object reference representation.") + + offset_size = self.trailer.offsetSize * self.trailer.offsetCount + offset = self.trailer.offsetTableOffset + + if offset + offset_size > pow(2, 64): + raise InvalidPlistException("Offset table is excessively long.") + + if self.trailer.offsetSize > 16: + raise InvalidPlistException("Offset size is greater than maximum integer size.") + + if self.trailer.objectRefSize == 0: + raise InvalidPlistException("Object reference size is zero.") + + if offset >= len(self.contents) - 32: + raise InvalidPlistException("Offset table offset is too large.") + + if offset < len("bplist00x"): + raise InvalidPlistException("Offset table offset is too small.") + + if self.trailer.topLevelObjectNumber >= self.trailer.offsetCount: + raise InvalidPlistException("Top level object number is larger than the number of objects.") + + offset_contents = self.contents[offset:offset+offset_size] + offset_i = 0 + offset_table_length = len(offset_contents) + + while offset_i < self.trailer.offsetCount: + begin = self.trailer.offsetSize*offset_i + end = begin+self.trailer.offsetSize + if end > offset_table_length: + raise InvalidPlistException("End of object is at invalid offset %d in offset table of length %d" % (end, offset_table_length)) + tmp_contents = offset_contents[begin:end] + tmp_sized = self.getSizedInteger(tmp_contents, self.trailer.offsetSize) + self.offsets.append(tmp_sized) + offset_i += 1 + self.setCurrentOffsetToObjectNumber(self.trailer.topLevelObjectNumber) + result = self.readObject() + except TypeError as e: + raise InvalidPlistException(e) + return result + + def setCurrentOffsetToObjectNumber(self, objectNumber): + if objectNumber > len(self.offsets) - 1: + raise InvalidPlistException("Invalid offset number: %d" % objectNumber) + self.currentOffset = self.offsets[objectNumber] + if self.currentOffset in self.offsetsStack: + raise InvalidPlistException("Recursive data structure detected in object: %d" % objectNumber) + + def beginOffsetProtection(self): + self.offsetsStack.append(self.currentOffset) + return self.currentOffset + + def endOffsetProtection(self, offset): + try: + index = self.offsetsStack.index(offset) + self.offsetsStack = self.offsetsStack[:index] + except ValueError as e: + pass + + def readObject(self): + protection = self.beginOffsetProtection() + result = None + tmp_byte = self.contents[self.currentOffset:self.currentOffset+1] + if len(tmp_byte) != 1: + raise InvalidPlistException("No object found at offset: %d" % self.currentOffset) + marker_byte = unpack("!B", tmp_byte)[0] + format = (marker_byte >> 4) & 0x0f + extra = marker_byte & 0x0f + self.currentOffset += 1 + + def proc_extra(extra): + if extra == 0b1111: + extra = self.readObject() + return extra + + # bool, null, or fill byte + if format == 0b0000: + if extra == 0b0000: + result = None + elif extra == 0b1000: + result = False + elif extra == 0b1001: + 
result = True + elif extra == 0b1111: + pass # fill byte + else: + raise InvalidPlistException("Invalid object found at offset: %d" % (self.currentOffset - 1)) + # int + elif format == 0b0001: + result = self.readInteger(pow(2, extra)) + # real + elif format == 0b0010: + result = self.readReal(extra) + # date + elif format == 0b0011 and extra == 0b0011: + result = self.readDate() + # data + elif format == 0b0100: + extra = proc_extra(extra) + result = self.readData(extra) + # ascii string + elif format == 0b0101: + extra = proc_extra(extra) + result = self.readAsciiString(extra) + # Unicode string + elif format == 0b0110: + extra = proc_extra(extra) + result = self.readUnicode(extra) + # uid + elif format == 0b1000: + result = self.readUid(extra) + # array + elif format == 0b1010: + extra = proc_extra(extra) + result = self.readArray(extra) + # set + elif format == 0b1100: + extra = proc_extra(extra) + result = set(self.readArray(extra)) + # dict + elif format == 0b1101: + extra = proc_extra(extra) + result = self.readDict(extra) + else: + raise InvalidPlistException("Invalid object found: {format: %s, extra: %s}" % (bin(format), bin(extra))) + self.endOffsetProtection(protection) + return result + + def readContents(self, length, description="Object contents"): + end = self.currentOffset + length + if end >= len(self.contents) - 32: + raise InvalidPlistException("%s extends into trailer" % description) + elif length < 0: + raise InvalidPlistException("%s length is less than zero" % length) + data = self.contents[self.currentOffset:end] + return data + + def readInteger(self, byteSize): + data = self.readContents(byteSize, "Integer") + self.currentOffset = self.currentOffset + byteSize + return self.getSizedInteger(data, byteSize, as_number=True) + + def readReal(self, length): + to_read = pow(2, length) + data = self.readContents(to_read, "Real") + if length == 2: # 4 bytes + result = unpack('>f', data)[0] + elif length == 3: # 8 bytes + result = unpack('>d', data)[0] + else: + raise InvalidPlistException("Unknown Real of length %d bytes" % to_read) + return result + + def readRefs(self, count): + refs = [] + i = 0 + while i < count: + fragment = self.readContents(self.trailer.objectRefSize, "Object reference") + ref = self.getSizedInteger(fragment, len(fragment)) + refs.append(ref) + self.currentOffset += self.trailer.objectRefSize + i += 1 + return refs + + def readArray(self, count): + if not isinstance(count, (int, long)): + raise InvalidPlistException("Count of entries in dict isn't of integer type.") + result = [] + values = self.readRefs(count) + i = 0 + while i < len(values): + self.setCurrentOffsetToObjectNumber(values[i]) + value = self.readObject() + result.append(value) + i += 1 + return result + + def readDict(self, count): + if not isinstance(count, (int, long)): + raise InvalidPlistException("Count of keys/values in dict isn't of integer type.") + result = {} + keys = self.readRefs(count) + values = self.readRefs(count) + i = 0 + while i < len(keys): + self.setCurrentOffsetToObjectNumber(keys[i]) + key = self.readObject() + self.setCurrentOffsetToObjectNumber(values[i]) + value = self.readObject() + result[key] = value + i += 1 + return result + + def readAsciiString(self, length): + if not isinstance(length, (int, long)): + raise InvalidPlistException("Length of ASCII string isn't of integer type.") + data = self.readContents(length, "ASCII string") + result = unpack("!%ds" % length, data)[0] + self.currentOffset += length + return str(result.decode('ascii')) + + def 
readUnicode(self, length): + if not isinstance(length, (int, long)): + raise InvalidPlistException("Length of Unicode string isn't of integer type.") + actual_length = length*2 + data = self.readContents(actual_length, "Unicode string") + self.currentOffset += actual_length + return data.decode('utf_16_be') + + def readDate(self): + data = self.readContents(8, "Date") + x = unpack(">d", data)[0] + if math.isnan(x): + raise InvalidPlistException("Date is NaN") + # Use timedelta to workaround time_t size limitation on 32-bit python. + try: + result = datetime.timedelta(seconds=x) + apple_reference_date + except OverflowError: + if x > 0: + result = datetime.datetime.max + else: + result = datetime.datetime.min + self.currentOffset += 8 + return result + + def readData(self, length): + if not isinstance(length, (int, long)): + raise InvalidPlistException("Length of data isn't of integer type.") + result = self.readContents(length, "Data") + self.currentOffset += length + return Data(result) + + def readUid(self, length): + if not isinstance(length, (int, long)): + raise InvalidPlistException("Uid length isn't of integer type.") + return Uid(self.readInteger(length+1)) + + def getSizedInteger(self, data, byteSize, as_number=False): + """Numbers of 8 bytes are signed integers when they refer to numbers, but unsigned otherwise.""" + result = 0 + if byteSize == 0: + raise InvalidPlistException("Encountered integer with byte size of 0.") + # 1, 2, and 4 byte integers are unsigned + elif byteSize == 1: + result = unpack('>B', data)[0] + elif byteSize == 2: + result = unpack('>H', data)[0] + elif byteSize == 4: + result = unpack('>L', data)[0] + elif byteSize == 8: + if as_number: + result = unpack('>q', data)[0] + else: + result = unpack('>Q', data)[0] + elif byteSize <= 16: + # Handle odd-sized or integers larger than 8 bytes + # Don't naively go over 16 bytes, in order to prevent infinite loops. 
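+            # int.from_bytes is available on Python 3 and decodes the
+            # big-endian bytes directly; the fallback accumulates one byte
+            # at a time via (result << 8) | byte.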
+            result = 0
+            if hasattr(int, 'from_bytes'):
+                result = int.from_bytes(data, 'big')
+            else:
+                for byte in data:
+                    if not isinstance(byte, int): # Python3.0-3.1.x return ints, 2.x return str
+                        byte = unpack_from('>B', byte)[0]
+                    result = (result << 8) | byte
+        else:
+            raise InvalidPlistException("Encountered integer longer than 16 bytes.")
+        return result
+
+class HashableWrapper(object):
+    def __init__(self, value):
+        self.value = value
+    def __repr__(self):
+        return "<HashableWrapper: %s>" % [self.value]
+
+class BoolWrapper(object):
+    def __init__(self, value):
+        self.value = value
+    def __repr__(self):
+        return "<BoolWrapper: %s>" % self.value
+
+class FloatWrapper(object):
+    _instances = {}
+    def __new__(klass, value):
+        # Ensure FloatWrapper(x) for a given float x is always the same object
+        wrapper = klass._instances.get(value)
+        if wrapper is None:
+            wrapper = object.__new__(klass)
+            wrapper.value = value
+            klass._instances[value] = wrapper
+        return wrapper
+    def __repr__(self):
+        return "<FloatWrapper: %s>" % self.value
+
+class StringWrapper(object):
+    __instances = {}
+
+    encodedValue = None
+    encoding = None
+
+    def __new__(cls, value):
+        '''Ensure we only have one instance for any string,
+        and that we encode ascii as 1 byte per character when possible'''
+
+        encodedValue = None
+
+        for encoding in ('ascii', 'utf_16_be'):
+            try:
+                encodedValue = value.encode(encoding)
+            except: pass
+            if encodedValue is not None:
+                if encodedValue not in cls.__instances:
+                    cls.__instances[encodedValue] = super(StringWrapper, cls).__new__(cls)
+                    cls.__instances[encodedValue].encodedValue = encodedValue
+                    cls.__instances[encodedValue].encoding = encoding
+                return cls.__instances[encodedValue]
+
+        raise ValueError('Unable to get ascii or utf_16_be encoding for %s' % repr(value))
+
+    def __len__(self):
+        '''Return roughly the number of characters in this string (half the byte length)'''
+        if self.encoding == 'ascii':
+            return len(self.encodedValue)
+        else:
+            return len(self.encodedValue)//2
+
+    def __lt__(self, other):
+        return self.encodedValue < other.encodedValue
+
+    @property
+    def encodingMarker(self):
+        if self.encoding == 'ascii':
+            return 0b0101
+        else:
+            return 0b0110
+
+    def __repr__(self):
+        return '<StringWrapper: %s, %s>' % (self.encoding, self.encodedValue)
+
+class PlistWriter(object):
+    header = b'bplist00bybiplist1.0'
+    file = None
+    byteCounts = None
+    trailer = None
+    computedUniques = None
+    writtenReferences = None
+    referencePositions = None
+    wrappedTrue = None
+    wrappedFalse = None
+    # Used to detect recursive object references.
+    objectsStack = []
+
+    def __init__(self, file):
+        self.reset()
+        self.file = file
+        self.wrappedTrue = BoolWrapper(True)
+        self.wrappedFalse = BoolWrapper(False)
+
+    def reset(self):
+        self.byteCounts = PlistByteCounts(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)
+        self.trailer = PlistTrailer(0, 0, 0, 0, 0)
+
+        # A set of all the uniques which have been computed.
+        self.computedUniques = set()
+        # A list of all the uniques which have been written.
+        self.writtenReferences = {}
+        # A dict of the positions of the written uniques.
+        self.referencePositions = {}
+
+        self.objectsStack = []
+
+    def positionOfObjectReference(self, obj):
+        """If the given object has been written already, return its
+        position in the offset table. Otherwise, return None."""
+        return self.writtenReferences.get(obj)
+
+    def writeRoot(self, root):
+        """
+        Strategy is:
+        - write header
+        - wrap root object so everything is hashable
+        - compute size of objects which will be written
+          - need to do this in order to know how large the object refs
+            will be in the list/dict/set reference lists
+        - write objects
+          - keep objects in writtenReferences
+          - keep positions of object references in referencePositions
+          - write object references with the length computed previously
+        - compute object reference length
+        - write object reference positions
+        - write trailer
+        """
+        output = self.header
+        wrapped_root = self.wrapRoot(root)
+        self.computeOffsets(wrapped_root, asReference=True, isRoot=True)
+        self.trailer = self.trailer._replace(**{'objectRefSize':self.intSize(len(self.computedUniques))})
+        self.writeObjectReference(wrapped_root, output)
+        output = self.writeObject(wrapped_root, output, setReferencePosition=True)
+
+        # output size at this point is an upper bound on how big the
+        # object reference offsets need to be.
+        self.trailer = self.trailer._replace(**{
+            'offsetSize':self.intSize(len(output)),
+            'offsetCount':len(self.computedUniques),
+            'offsetTableOffset':len(output),
+            'topLevelObjectNumber':0
+            })
+
+        output = self.writeOffsetTable(output)
+        output += pack('!xxxxxxBBQQQ', *self.trailer)
+        self.file.write(output)
+
+    def beginRecursionProtection(self, obj):
+        if not isinstance(obj, (set, dict, list, tuple)):
+            return
+        if id(obj) in self.objectsStack:
+            raise InvalidPlistException("Recursive containers are not allowed in plists.")
+        self.objectsStack.append(id(obj))
+
+    def endRecursionProtection(self, obj):
+        if not isinstance(obj, (set, dict, list, tuple)):
+            return
+        try:
+            index = self.objectsStack.index(id(obj))
+            self.objectsStack = self.objectsStack[:index]
+        except ValueError as e:
+            pass
+
+    def wrapRoot(self, root):
+        result = None
+        self.beginRecursionProtection(root)
+
+        if isinstance(root, bool):
+            if root is True:
+                result = self.wrappedTrue
+            else:
+                result = self.wrappedFalse
+        elif isinstance(root, float):
+            result = FloatWrapper(root)
+        elif isinstance(root, set):
+            n = set()
+            for value in root:
+                n.add(self.wrapRoot(value))
+            result = HashableWrapper(n)
+        elif isinstance(root, dict):
+            n = {}
+            for key, value in iteritems(root):
+                n[self.wrapRoot(key)] = self.wrapRoot(value)
+            result = HashableWrapper(n)
+        elif isinstance(root, list):
+            n = []
+            for value in root:
+                n.append(self.wrapRoot(value))
+            result = HashableWrapper(n)
+        elif isinstance(root, tuple):
+            n = tuple([self.wrapRoot(value) for value in root])
+            result = HashableWrapper(n)
+        elif isinstance(root, (str, unicode)) and not isinstance(root, Data):
+            result = StringWrapper(root)
+        elif isinstance(root, bytes):
+            result = Data(root)
+        else:
+            result = root
+
+        self.endRecursionProtection(root)
+        return result
+
+    def incrementByteCount(self, field, incr=1):
+        self.byteCounts = self.byteCounts._replace(**{field:self.byteCounts.__getattribute__(field) + incr})
+
+    def computeOffsets(self, obj, asReference=False, isRoot=False):
+        def check_key(key):
+            if key is None:
+                raise InvalidPlistException('Dictionary keys cannot be null in plists.')
+            elif isinstance(key, Data):
+                raise InvalidPlistException('Data cannot be dictionary keys in plists.')
+            elif not isinstance(key, StringWrapper):
+                raise InvalidPlistException('Keys must be strings.')
+
+        def proc_size(size):
+            if size > 0b1110:
+                size += self.intSize(size)
+            return size
+        # If this should be a reference, then we keep a record of it in the
+        # uniques table.
+        if asReference:
+            if obj in self.computedUniques:
+                return
+            else:
+                self.computedUniques.add(obj)
+
+        if obj is None:
+            self.incrementByteCount('nullBytes')
+        elif isinstance(obj, BoolWrapper):
+            self.incrementByteCount('boolBytes')
+        elif isinstance(obj, Uid):
+            size = self.intSize(obj.integer)
+            self.incrementByteCount('uidBytes', incr=1+size)
+        elif isinstance(obj, (int, long)):
+            size = self.intSize(obj)
+            self.incrementByteCount('intBytes', incr=1+size)
+        elif isinstance(obj, FloatWrapper):
+            size = self.realSize(obj)
+            self.incrementByteCount('realBytes', incr=1+size)
+        elif isinstance(obj, datetime.datetime):
+            self.incrementByteCount('dateBytes', incr=2)
+        elif isinstance(obj, Data):
+            size = proc_size(len(obj))
+            self.incrementByteCount('dataBytes', incr=1+size)
+        elif isinstance(obj, StringWrapper):
+            size = proc_size(len(obj))
+            self.incrementByteCount('stringBytes', incr=1+size)
+        elif isinstance(obj, HashableWrapper):
+            obj = obj.value
+            if isinstance(obj, set):
+                size = proc_size(len(obj))
+                self.incrementByteCount('setBytes', incr=1+size)
+                for value in obj:
+                    self.computeOffsets(value, asReference=True)
+            elif isinstance(obj, (list, tuple)):
+                size = proc_size(len(obj))
+                self.incrementByteCount('arrayBytes', incr=1+size)
+                for value in obj:
+                    asRef = True
+                    self.computeOffsets(value, asReference=True)
+            elif isinstance(obj, dict):
+                size = proc_size(len(obj))
+                self.incrementByteCount('dictBytes', incr=1+size)
+                for key, value in iteritems(obj):
+                    check_key(key)
+                    self.computeOffsets(key, asReference=True)
+                    self.computeOffsets(value, asReference=True)
+        else:
+            raise InvalidPlistException("Unknown object type: %s (%s)" % (type(obj).__name__, repr(obj)))
+
+    def writeObjectReference(self, obj, output):
+        """Tries to write an object reference, adding it to the references
+        table. Does not write the actual object bytes or set the reference
+        position. Returns a tuple of whether the object was a new reference
+        (True if it was, False if it already was in the reference table)
+        and the new output.
+        """
+        position = self.positionOfObjectReference(obj)
+        if position is None:
+            self.writtenReferences[obj] = len(self.writtenReferences)
+            output += self.binaryInt(len(self.writtenReferences) - 1, byteSize=self.trailer.objectRefSize)
+            return (True, output)
+        else:
+            output += self.binaryInt(position, byteSize=self.trailer.objectRefSize)
+            return (False, output)
+
+    def writeObject(self, obj, output, setReferencePosition=False):
+        """Serializes the given object to the output. Returns output.
+        If setReferencePosition is True, will set the position the
+        object was written.
+        """
+        def proc_variable_length(format, length):
+            result = b''
+            if length > 0b1110:
+                result += pack('!B', (format << 4) | 0b1111)
+                result = self.writeObject(length, result)
+            else:
+                result += pack('!B', (format << 4) | length)
+            return result
+
+        def timedelta_total_seconds(td):
+            # Shim for Python 2.6 compatibility, which doesn't have total_seconds.
+            # Make one argument a float to ensure the right calculation.
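+            # (86,400 seconds per day; dividing the microsecond total by
+            # 10.0**6 converts it back to seconds.)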
+            return (td.microseconds + (td.seconds + td.days * 24 * 3600) * 10.0**6) / 10.0**6
+
+        if setReferencePosition:
+            self.referencePositions[obj] = len(output)
+
+        if obj is None:
+            output += pack('!B', 0b00000000)
+        elif isinstance(obj, BoolWrapper):
+            if obj.value is False:
+                output += pack('!B', 0b00001000)
+            else:
+                output += pack('!B', 0b00001001)
+        elif isinstance(obj, Uid):
+            size = self.intSize(obj.integer)
+            output += pack('!B', (0b1000 << 4) | size - 1)
+            output += self.binaryInt(obj.integer)
+        elif isinstance(obj, (int, long)):
+            byteSize = self.intSize(obj)
+            root = math.log(byteSize, 2)
+            output += pack('!B', (0b0001 << 4) | int(root))
+            output += self.binaryInt(obj, as_number=True)
+        elif isinstance(obj, FloatWrapper):
+            # just use doubles
+            output += pack('!B', (0b0010 << 4) | 3)
+            output += self.binaryReal(obj)
+        elif isinstance(obj, datetime.datetime):
+            try:
+                timestamp = (obj - apple_reference_date).total_seconds()
+            except AttributeError:
+                timestamp = timedelta_total_seconds(obj - apple_reference_date)
+            output += pack('!B', 0b00110011)
+            output += pack('!d', float(timestamp))
+        elif isinstance(obj, Data):
+            output += proc_variable_length(0b0100, len(obj))
+            output += obj
+        elif isinstance(obj, StringWrapper):
+            output += proc_variable_length(obj.encodingMarker, len(obj))
+            output += obj.encodedValue
+        elif isinstance(obj, bytes):
+            output += proc_variable_length(0b0101, len(obj))
+            output += obj
+        elif isinstance(obj, HashableWrapper):
+            obj = obj.value
+            if isinstance(obj, (set, list, tuple)):
+                if isinstance(obj, set):
+                    output += proc_variable_length(0b1100, len(obj))
+                else:
+                    output += proc_variable_length(0b1010, len(obj))
+
+                objectsToWrite = []
+                for objRef in sorted(obj) if isinstance(obj, set) else obj:
+                    (isNew, output) = self.writeObjectReference(objRef, output)
+                    if isNew:
+                        objectsToWrite.append(objRef)
+                for objRef in objectsToWrite:
+                    output = self.writeObject(objRef, output, setReferencePosition=True)
+            elif isinstance(obj, dict):
+                output += proc_variable_length(0b1101, len(obj))
+                keys = []
+                values = []
+                objectsToWrite = []
+                for key, value in sorted(iteritems(obj)):
+                    keys.append(key)
+                    values.append(value)
+                for key in keys:
+                    (isNew, output) = self.writeObjectReference(key, output)
+                    if isNew:
+                        objectsToWrite.append(key)
+                for value in values:
+                    (isNew, output) = self.writeObjectReference(value, output)
+                    if isNew:
+                        objectsToWrite.append(value)
+                for objRef in objectsToWrite:
+                    output = self.writeObject(objRef, output, setReferencePosition=True)
+        return output
+
+    def writeOffsetTable(self, output):
+        """Writes all of the object reference offsets."""
+        all_positions = []
+        writtenReferences = list(self.writtenReferences.items())
+        writtenReferences.sort(key=lambda x: x[1])
+        for obj,order in writtenReferences:
+            # Porting note: Elsewhere we deliberately replace empty unicode strings
+            # with empty binary strings, but the empty unicode string
+            # goes into writtenReferences. This isn't an issue in Py2
+            # because u'' and b'' have the same hash; but it is in
+            # Py3, where they don't.
+            if bytes != str and obj == unicodeEmpty:
+                obj = b''
+            position = self.referencePositions.get(obj)
+            if position is None:
+                raise InvalidPlistException("Error while writing offsets table. Object not found. 
%s" % obj) + output += self.binaryInt(position, self.trailer.offsetSize) + all_positions.append(position) + return output + + def binaryReal(self, obj): + # just use doubles + result = pack('>d', obj.value) + return result + + def binaryInt(self, obj, byteSize=None, as_number=False): + result = b'' + if byteSize is None: + byteSize = self.intSize(obj) + if byteSize == 1: + result += pack('>B', obj) + elif byteSize == 2: + result += pack('>H', obj) + elif byteSize == 4: + result += pack('>L', obj) + elif byteSize == 8: + if as_number: + result += pack('>q', obj) + else: + result += pack('>Q', obj) + elif byteSize <= 16: + try: + result = pack('>Q', 0) + pack('>Q', obj) + except struct_error as e: + raise InvalidPlistException("Unable to pack integer %d: %s" % (obj, e)) + else: + raise InvalidPlistException("Core Foundation can't handle integers with size greater than 16 bytes.") + return result + + def intSize(self, obj): + """Returns the number of bytes necessary to store the given integer.""" + # SIGNED + if obj < 0: # Signed integer, always 8 bytes + return 8 + # UNSIGNED + elif obj <= 0xFF: # 1 byte + return 1 + elif obj <= 0xFFFF: # 2 bytes + return 2 + elif obj <= 0xFFFFFFFF: # 4 bytes + return 4 + # SIGNED + # 0x7FFFFFFFFFFFFFFF is the max. + elif obj <= 0x7FFFFFFFFFFFFFFF: # 8 bytes signed + return 8 + elif obj <= 0xffffffffffffffff: # 8 bytes unsigned + return 16 + else: + raise InvalidPlistException("Core Foundation can't handle integers with size greater than 8 bytes.") + + def realSize(self, obj): + return 8 diff --git a/IngestModules/MacOSX_Account_Parser/macosx_account_parser.py b/IngestModules/MacOSX_Account_Parser/macosx_account_parser.py new file mode 100644 index 0000000..8c5390d --- /dev/null +++ b/IngestModules/MacOSX_Account_Parser/macosx_account_parser.py @@ -0,0 +1,421 @@ +""" +Copyright 2020 Luke Gaddie + +Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated +documentation files (the "Software"), to deal in the Software without restriction, including without limitation +the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, +and to permit persons to whom the Software is furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all copies or substantial portions of the +Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE +WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS +OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, +TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
+""" + +import os +import inspect +from biplist import readPlist, NotBinaryPlistException, InvalidPlistException +from StringIO import StringIO + +from java.io import File +from java.util.logging import Level +from org.sleuthkit.datamodel import BlackboardArtifact +from org.sleuthkit.datamodel import BlackboardAttribute +from org.sleuthkit.autopsy.ingest import IngestModule +from org.sleuthkit.autopsy.ingest import DataSourceIngestModule +from org.sleuthkit.autopsy.ingest import IngestModuleFactoryAdapter +from org.sleuthkit.autopsy.ingest import IngestMessage +from org.sleuthkit.autopsy.ingest import IngestServices +from org.sleuthkit.autopsy.coreutils import Logger +from org.sleuthkit.autopsy.casemodule import Case +from org.sleuthkit.autopsy.casemodule.services import Blackboard +from org.sleuthkit.autopsy.datamodel import ContentUtils + + +class OSXAccountParserDataSourceIngestModuleFactory(IngestModuleFactoryAdapter): + moduleName = "MacOSX Account Parser" + + def getModuleDisplayName(self): + return self.moduleName + + def getModuleDescription(self): + return "Extract user account information and account shadows from OSX v10.8+ for hashcat cracking." + + def getModuleVersionNumber(self): + return "1.0" + + def isDataSourceIngestModuleFactory(self): + return True + + def createDataSourceIngestModule(self, ingestOptions): + return OSXAccountParserDataSourceIngestModule() + + +class OSXAccountParserDataSourceIngestModule(DataSourceIngestModule): + _logger = Logger.getLogger(OSXAccountParserDataSourceIngestModuleFactory.moduleName) + + def log(self, level, msg): + self._logger.logp(level, self.__class__.__name__, inspect.stack()[1][3], msg) + + def __init__(self): + self.context = None + + self.osAccountAttributeTypes = { + 'home': { + 'attr_key': 'TSK_HOME_DIRECTORY', + 'attr_type': BlackboardAttribute.TSK_BLACKBOARD_ATTRIBUTE_VALUE_TYPE.STRING, + 'display_name': 'Home Directory', + 'custom': True, + }, + 'shell': { + 'attr_key': 'TSK_SHELL', + 'attr_type': BlackboardAttribute.TSK_BLACKBOARD_ATTRIBUTE_VALUE_TYPE.STRING, + 'display_name': 'Shell', + 'custom': True, + }, + 'hint': { + 'attr_key': 'TSK_PASSWORD_HINT', + 'attr_type': BlackboardAttribute.TSK_BLACKBOARD_ATTRIBUTE_VALUE_TYPE.STRING, + 'display_name': 'Password Hint', + 'custom': True, + }, + 'failedLoginTimestamp': { + 'attr_key': 'TSK_FAILED_LOGIN_TIMESTAMP', + 'attr_type': BlackboardAttribute.TSK_BLACKBOARD_ATTRIBUTE_VALUE_TYPE.DATETIME, + 'display_name': 'Last Failed Login', + 'custom': True, + }, + 'failedLoginCount': { + 'attr_key': 'TSK_FAILED_LOGIN_COUNT', + 'attr_type': BlackboardAttribute.TSK_BLACKBOARD_ATTRIBUTE_VALUE_TYPE.LONG, + 'display_name': 'Failed Login Count', + 'custom': True, + }, + 'passwordLastSetTime': { + 'attr_key': 'TSK_PASSWORD_LAST_SET_TIME', + 'attr_type': BlackboardAttribute.TSK_BLACKBOARD_ATTRIBUTE_VALUE_TYPE.DATETIME, + 'display_name': 'Password Last Set', + 'custom': True, + }, + 'generateduuid': { + 'attr_key': 'TSK_GENERATED_UUID', + 'attr_type': BlackboardAttribute.TSK_BLACKBOARD_ATTRIBUTE_VALUE_TYPE.STRING, + 'display_name': 'Generated UUID', + 'custom': True, + }, + 'IsHidden': { + 'attr_key': 'TSK_IS_HIDDEN', + 'attr_type': BlackboardAttribute.TSK_BLACKBOARD_ATTRIBUTE_VALUE_TYPE.STRING, + 'display_name': 'Hidden', + 'custom': True, + }, + 'creationTime': { + 'attr_key': 'TSK_DATETIME_CREATED', + }, + 'realname': { + 'attr_key': 'TSK_NAME', + }, + 'uid': { + 'attr_key': 'TSK_USER_ID', + }, + 'name': { + 'attr_key': 'TSK_USER_NAME', + }, + } + + self.hashedCredentialAttributeTypes = { + 
'hashType': { + 'attr_key': 'TSK_HASH_TYPE', + 'attr_type': BlackboardAttribute.TSK_BLACKBOARD_ATTRIBUTE_VALUE_TYPE.STRING, + 'display_name': 'Hash Type', + 'custom': True, + }, + 'salt': { + 'attr_key': 'TSK_SALT', + 'attr_type': BlackboardAttribute.TSK_BLACKBOARD_ATTRIBUTE_VALUE_TYPE.STRING, + 'display_name': 'Salt', + 'custom': True, + }, + 'iterations': { + 'attr_key': 'TSK_ITERATIONS', + 'attr_type': BlackboardAttribute.TSK_BLACKBOARD_ATTRIBUTE_VALUE_TYPE.LONG, + 'display_name': 'Iterations', + 'custom': True, + }, + 'entropy': { + 'attr_key': 'TSK_HASH_ENTROPY', + 'attr_type': BlackboardAttribute.TSK_BLACKBOARD_ATTRIBUTE_VALUE_TYPE.STRING, + 'display_name': 'Entropy', + 'custom': True, + }, + 'verifier': { + 'attr_key': 'TSK_VERIFIER', + 'attr_type': BlackboardAttribute.TSK_BLACKBOARD_ATTRIBUTE_VALUE_TYPE.STRING, + 'display_name': 'Verifier', + 'custom': True, + }, + 'hashcatEntry': { + 'attr_key': 'TSK_HASHCAT_ENTRY', + 'attr_type': BlackboardAttribute.TSK_BLACKBOARD_ATTRIBUTE_VALUE_TYPE.STRING, + 'display_name': 'Hashcat Entry', + 'custom': True, + }, + + } + + self.moduleName = "MacOSX Account Parser" + self.temporary_dir = os.path.join(Case.getCurrentCase().getTempDirectory(), self.moduleName.replace(' ', '_')) + + self.case = Case.getCurrentCase().getSleuthkitCase() + self.file_manager = Case.getCurrentCase().getServices().getFileManager() + self.blackboard = Case.getCurrentCase().getSleuthkitCase().getBlackboard() + + def startUp(self, context): + self.context = context + + def process(self, dataSource, progressBar): + + try: + os.mkdir(self.temporary_dir) + except: + pass + + progressBar.switchToIndeterminate() + + self.setup_custom_artifact_types() + self.setup_custom_attribute_types() + + filesProcessed = 0 + + files = self.file_manager.findFiles(dataSource, "%.plist", "%var/db/dslocal/nodes/Default/users/") + + totalNumberFiles = len(files) + progressBar.switchToDeterminate(totalNumberFiles) + + self.log(Level.INFO, "Found " + str(totalNumberFiles) + " files to process.") + for file in files: + self.log(Level.INFO, "Processing %s" % file.getName()) + + # Check if the user pressed cancel while we were busy + if self.context.isJobCancelled(): + return IngestModule.ProcessResult.OK + + # Copy the Plist file to a temporary directory to work with + tmpPlistFile = self.copy_to_temp_directory(file) + self.log(Level.INFO, "Reading %s as a plist" % tmpPlistFile) + + try: + # Read the Plist file using biplist + plist = readPlist(tmpPlistFile) + + # Extract all of the plist data that we can + extractedData = self.extract_plist_data(plist) + + # Each Plist file gets a generic TSK_OS_ACCOUNT Artifact Type + osAccountArtifact = file.newArtifact(BlackboardArtifact.ARTIFACT_TYPE.TSK_OS_ACCOUNT) + osArtifactAttributes = [] + + # We can iterate over any expected attribute types and assign them to the artifact. + for dictKey in self.osAccountAttributeTypes: + try: + osArtifactAttributes.append(BlackboardAttribute( + self.case.getAttributeType(self.osAccountAttributeTypes[dictKey]['attr_key']), + self.moduleName, extractedData[dictKey])) + except KeyError: + # Discarding the attribute type if, for whatever reason, they're not in the Plist. + pass + + # When we're done, go ahead and add them to the OS Account Artifact. We'll post it later. + osAccountArtifact.addAttributes(osArtifactAttributes) + + # An account shadow can have multiple hashes (e.g. SALTED-SHA512-PBKDF2 & SRP-RFC5054-4096-SHA512-PBKDF2) + # so we'll create an array to handle them all, then add them all at the end. 
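+                # Attributes are gathered first and the artifacts are posted
+                # further below, so each artifact is only announced once it
+                # is fully populated.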
+ hashedCredArtifacts = [] + + # For each extracted hash + for shadow in extractedData['shadows']: + # Create a new artifact using our custom TSK_HASHED_CREDENTIAL artifact type we set up earlier. + hashedCredArtifact = file.newArtifact(self.case.getArtifactTypeID("TSK_HASHED_CREDENTIAL")) + + hashedCredArtifactAttributes = [] + # We can iterate over any expected attribute types and assign them to the artifact. + for dictKey in self.hashedCredentialAttributeTypes: + try: + hashedCredArtifactAttributes.append(BlackboardAttribute( + self.case.getAttributeType( + self.hashedCredentialAttributeTypes[dictKey]['attr_key']), + self.moduleName, shadow[dictKey] + )) + except KeyError: + # Discarding the attribute type if, for whatever reason, they're not in the Plist. + pass + + # Add the attributes to the artifact. + hashedCredArtifact.addAttributes(hashedCredArtifactAttributes) + # and add our artifact to the array of found shadows for the account. + hashedCredArtifacts.append(hashedCredArtifact) + + try: + # Post our extracted account information. + self.blackboard.postArtifact(osAccountArtifact, self.moduleName) + + # Then iterate over our harvested credential hashes for the account, posting them. + for hashedCredArtifact in hashedCredArtifacts: + self.blackboard.postArtifact(hashedCredArtifact, self.moduleName) + + except Blackboard.BlackboardException: + self.log(Level.SEVERE, + "Unable to index blackboard artifact " + str(osAccountArtifact.getArtifactTypeName())) + + except (InvalidPlistException, NotBinaryPlistException), e: + self.log(Level.INFO, "Unable to parse %s as a Plist file. Skipping." % file.getName()) + + # We're done processing the Plist file, clean it up from our temporary directory. + self.remove_from_temp_directory(file) + + # Update the progress bar, as progress has been made. + filesProcessed += 1 + progressBar.progress(filesProcessed) + + # We're done. Post a status message for the user. + IngestServices.getInstance().postMessage( + IngestMessage.createMessage(IngestMessage.MessageType.DATA, self.moduleName, + "Done processing %d OSX user accounts." % totalNumberFiles)) + + return IngestModule.ProcessResult.OK + + # Given a Plist object obtained from biplist, iterate through and extract the information we're interested in. + def extract_plist_data(self, plist): + # Basic shell, will be returned at the end of all of this. + extractedInformation = {'shadows': []} + + # Keys in the Plist that we're going to be extracting as strings. + interestingStrKeys = ['uid', 'home', 'shell', 'realname', 'uid', 'hint', 'name', 'generateduuid', 'IsHidden'] + + # Plist objects are stored values in an array by default. + # If they don't exist, set them as an empty array, otherwise we really do nothing. + for key in interestingStrKeys: + try: + extractedInformation[key] = plist.setdefault(key, [])[0] + except (IndexError, KeyError): + pass + + # accountPolicyData is where some basic information about the account is stored. + if 'accountPolicyData' in plist and len(plist['accountPolicyData']): + accountPolicyData = self.readPlistFromString(plist['accountPolicyData'][0]) + + # Timestamp keys that we're interested in. + interestingTsKeys = ['failedLoginTimestamp', 'creationTime', 'passwordLastSetTime'] + # Integer keys that we're interested in. 
            interestingIntKeys = ['failedLoginCount']
+
+            for key in interestingIntKeys:
+                if key in accountPolicyData:
+                    extractedInformation[key] = accountPolicyData[key]
+
+            for key in interestingTsKeys:
+                if key in accountPolicyData:
+                    # Convert the String into a Long for Autopsy
+                    extractedInformation[key] = long(float(accountPolicyData[key]))
+
+        # ShadowHashData is where the account credentials are stored.
+        if 'ShadowHashData' in plist:
+            try:
+                # as a Plist inside of the current Plist. Plist-ception.
+                shadowHashPlist = self.readPlistFromString(plist['ShadowHashData'][0])
+                # Multiple hash types can be stored inside of here - we want all of them.
+                for hashType in shadowHashPlist:
+                    hashDetails = {
+                        'hashType': hashType,
+                        'salt': '',
+                        'entropy': '',
+                        'iterations': '',
+                        'verifier': '',
+                        # hashcatEntry is not stored in the ShadowHashData - we'll be generating it later.
+                        'hashcatEntry': '',
+                    }
+
+                    for key in shadowHashPlist[hashType]:
+                        # We'll want to convert these into hex for storage
+                        if key in ['salt', 'entropy', 'verifier']:
+                            shadowHashPlist[hashType][key] = shadowHashPlist[hashType][key].encode('hex')
+
+                        # Add what we find to our results
+                        hashDetails[key] = shadowHashPlist[hashType][key]
+
+                    # If the hash is of type SALTED-SHA512-PBKDF2,
+                    # then we generate the hash that we would feed to Hashcat in the form of:
+                    # $ml$(iterations)$(salt)$(first 128 hex characters of entropy)
+                    if hashDetails['hashType'] == 'SALTED-SHA512-PBKDF2':
+                        hashDetails['hashcatEntry'] = "$ml$%s$%s$%s" % (
+                            hashDetails['iterations'], hashDetails['salt'], hashDetails['entropy'][:128])
+                    else:
+                        hashDetails['hashcatEntry'] = ''
+
+                    # Add it to our list of found shadows
+                    extractedInformation['shadows'].append(hashDetails)
+
+            except (InvalidPlistException, NotBinaryPlistException), e:
+                print "Not a plist:", e
+        return extractedInformation
+
+    def setup_custom_attribute_types(self):
+        self.log(Level.INFO, "Setting up custom attribute types.")
+        # Set up custom attribute types for OS Accounts
+        for attribute in self.osAccountAttributeTypes:
+            if self.osAccountAttributeTypes[attribute].setdefault('custom', False):
+                self.create_custom_attribute_type(self.osAccountAttributeTypes[attribute]['attr_key'],
+                                                  self.osAccountAttributeTypes[attribute]['attr_type'],
+                                                  self.osAccountAttributeTypes[attribute]['display_name'])
+
+        # Set up custom attribute types for hashed credentials.
+        for attribute in self.hashedCredentialAttributeTypes:
+            if self.hashedCredentialAttributeTypes[attribute].setdefault('custom', False):
+                self.create_custom_attribute_type(self.hashedCredentialAttributeTypes[attribute]['attr_key'],
+                                                  self.hashedCredentialAttributeTypes[attribute]['attr_type'],
+                                                  self.hashedCredentialAttributeTypes[attribute]['display_name'])
+
+        self.log(Level.INFO, 'Done setting up custom attribute types.')
+
+    def create_custom_attribute_type(self, attr_key, attr_type, attr_display_name):
+        try:
+            self.case.addArtifactAttributeType(attr_key, attr_type, attr_display_name)
+        except:
+            self.log(Level.INFO,
+                     "Exception while creating the \"%s\" Attribute Type." %
+
+    def setup_custom_artifact_types(self):
+        self.log(Level.INFO, "Setting up custom artifact types.")
+        try:
+            self.case.addArtifactType("TSK_HASHED_CREDENTIAL", "Hashed Credentials")
+        # Bare except for the same reason as above: it also catches the Java
+        # exception thrown when the artifact type already exists.
+        except:
+            self.log(Level.INFO,
+                     "Exception while creating the TSK_HASHED_CREDENTIAL Artifact Type.")
+        self.log(Level.INFO, "Done setting up custom artifact types.")
+
+    # Read a string as a Plist.
+    # We use this instead of biplist's own readPlistFromString method, which relies on
+    # io.BytesIO (a native implementation that does not play well under Jython); StringIO does.
+    def readPlistFromString(self, data):
+        return readPlist(StringIO(data))
+
+    # Given a file object, copy it to a temporary location and return the file path.
+    def copy_to_temp_directory(self, file):
+        filepath = self.get_temporary_file_path(file)
+        ContentUtils.writeToFile(file, File(filepath))
+        return filepath
+
+    # Given a file object, remove its copy from the temporary directory.
+    def remove_from_temp_directory(self, file):
+        filepath = self.get_temporary_file_path(file)
+        try:
+            os.remove(filepath)
+        except OSError:
+            self.log(Level.INFO, "Failed to remove file " + filepath)
+
+    # Returns the location where we store temporary files.
+    def get_temporary_file_path(self, file):
+        return os.path.join(self.temporary_dir, str(file.getId()) + "-" + file.getName())
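+
+# For reference - a hedged sketch, not part of this module: the 'hashcatEntry'
+# values built in extract_plist_data() above target hashcat mode 7100
+# (macOS v10.8+ PBKDF2-SHA512). Assuming the entries were exported one per line
+# into a file named hashes.txt, a dictionary attack against them could look like:
+#
+#   hashcat -m 7100 -a 0 hashes.txt wordlist.txt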
diff --git a/IngestModules/Microsoft_Teams_Parser/README.md b/IngestModules/Microsoft_Teams_Parser/README.md
new file mode 100644
index 0000000..4eddec1
--- /dev/null
+++ b/IngestModules/Microsoft_Teams_Parser/README.md
@@ -0,0 +1,13 @@
+- __Description:__ This plugin enumerates the Microsoft Teams LevelDB database and extracts information such as:
+  - Call data
+  - Messages (chats, posts and comments) and their attachments, such as SharePoint links for files and hyperlinks
+  - Reactions for messages
+  - Calendar entries
+  - Contacts
+- __Author:__ Alexander Bilz
+- __Minimum Autopsy version:__ 4.18.0
+- __OS's supported:__ Windows
+- __Module Location__: https://github.com/lxndrblz/forensicsim/
+- __Website:__ https://forensics.im
+- __Source Code:__ https://github.com/lxndrblz/forensicsim/
+- __License:__ MIT License
\ No newline at end of file
diff --git a/IngestModules/Registry-Explorer/README.md b/IngestModules/Registry-Explorer/README.md
new file mode 100644
index 0000000..b9c080b
--- /dev/null
+++ b/IngestModules/Registry-Explorer/README.md
@@ -0,0 +1,7 @@
+- __Description:__ Analyze Registry hives based on the bookmarks provided by Eric Zimmerman for his Registry Explorer tool.
+- __Author:__ Mohammed Hasan (0xmohammedhassan@gmail.com)
+- __Minimum Autopsy version:__ 4.19.3
+- __Module Location__: https://github.com/0xMohammed/Autopsy-Registry-Explorer/releases/download/v0.1Beta/RegistryExplorerv0.2Beta.zip
+- __Website:__ https://github.com/0xMohammed/Autopsy-Registry-Explorer
+- __Source Code:__ https://github.com/0xMohammed/Autopsy-Registry-Explorer
+- __License:__ GNU General Public License v3.0
diff --git a/IngestModules/cLeapp/README.md b/IngestModules/cLeapp/README.md
new file mode 100644
index 0000000..1c5747b
--- /dev/null
+++ b/IngestModules/cLeapp/README.md
@@ -0,0 +1,8 @@
+- __Description:__ Process ChromeOS data using the cLeapp program
+- __Author:__ Mark McKinnon (Mark dot McKinnon at gmail dot com)
+- __Minimum Autopsy version:__ 4.16.0
+- __OS's supported:__ Windows
+- __Module Location__: https://github.com/markmckinnon/Autopsy-NBM-Plugins/tree/main/cLeapp-Autopsy-Plugin
+- __Website:__ https://github.com/markmckinnon/Autopsy-NBM-Plugins/tree/main/cLeapp-Autopsy-Plugin
+- __Source Code:__ https://github.com/markmckinnon/Autopsy-NBM-Plugins/tree/main/cLeapp-Autopsy-Plugin
+- __License:__ Apache 2.0 License
\ No newline at end of file
diff --git a/IngestModules/cLeapp/cleappanalyzer.nbm b/IngestModules/cLeapp/cleappanalyzer.nbm
new file mode 100644
index 0000000..befe7c6
Binary files /dev/null and b/IngestModules/cLeapp/cleappanalyzer.nbm differ
diff --git a/IngestModules/rLeapp/README.md b/IngestModules/rLeapp/README.md
new file mode 100644
index 0000000..9abeafe
--- /dev/null
+++ b/IngestModules/rLeapp/README.md
@@ -0,0 +1,8 @@
+- __Description:__ Process Returns and Archives using the rLeapp program
+- __Author:__ Mark McKinnon (Mark dot McKinnon at gmail dot com)
+- __Minimum Autopsy version:__ 4.18.0
+- __OS's supported:__ Windows
+- __Module Location__: https://github.com/markmckinnon/Autopsy-NBM-Plugins/tree/main/rLeapp-Autopsy-Plugin
+- __Website:__ https://github.com/markmckinnon/Autopsy-NBM-Plugins/tree/main/rLeapp-Autopsy-Plugin
+- __Source Code:__ https://github.com/markmckinnon/Autopsy-NBM-Plugins/tree/main/rLeapp-Autopsy-Plugin
+- __License:__ Apache 2.0 License
\ No newline at end of file
diff --git a/IngestModules/rLeapp/rleappanalyzer.nbm b/IngestModules/rLeapp/rleappanalyzer.nbm
new file mode 100644
index 0000000..068bbf6
Binary files /dev/null and b/IngestModules/rLeapp/rleappanalyzer.nbm differ
diff --git a/IngestModules/vLeapp/README.md b/IngestModules/vLeapp/README.md
new file mode 100644
index 0000000..d441614
--- /dev/null
+++ b/IngestModules/vLeapp/README.md
@@ -0,0 +1,8 @@
+- __Description:__ Process vehicle data using the vLeapp program
+- __Author:__ Mark McKinnon (Mark dot McKinnon at gmail dot com)
+- __Minimum Autopsy version:__ 4.18.0
+- __OS's supported:__ Windows
+- __Module Location__: https://github.com/markmckinnon/Autopsy-NBM-Plugins/tree/main/vLeapp-Autopsy-Plugin
+- __Website:__ https://github.com/markmckinnon/Autopsy-NBM-Plugins/tree/main/vLeapp-Autopsy-Plugin
+- __Source Code:__ https://github.com/markmckinnon/Autopsy-NBM-Plugins/tree/main/vLeapp-Autopsy-Plugin
+- __License:__ Apache 2.0 License
\ No newline at end of file
diff --git a/IngestModules/vLeapp/vleappanalyzer.nbm b/IngestModules/vLeapp/vleappanalyzer.nbm
new file mode 100644
index 0000000..dedaea0
Binary files /dev/null and b/IngestModules/vLeapp/vleappanalyzer.nbm differ
diff --git a/README.md b/README.md
index be67384..5cb4313 100644
--- a/README.md
+++ b/README.md
@@ -1,22 +1,39 @@
-# Autopsy 3rd Party Module Repository
+# Autopsy Add-on Modules
-This repository contains the 3rd party Autopsy add-on modules. You have two choices for using it.
+This repository contains third-party add-on modules for the [Autopsy Digital Forensics Platform](http://www.autopsy.com). Each module has a folder in the repository that contains a README file. Some of the modules are stored in this repository; others are hosted on another site, with a link in their README.
-1. Make a copy of this repository by downloading a ZIP file of it. You can do this by clicking on "Clone or download" and then "Download ZIP".
-![Download Image](images/download.png)
+
+How To Use The Site:
+1. Find the module that meets your needs
+2. Download and install it
-2. You can download specific modules from the site. This is easier for Java NBM modules than it is for Python modules, which may contain a number of files.
-The modules are organized by their type.
-- Ingest modules analyze files as they are added to the case. This is most common type of module.
-- Content viewer modules are in the lower right corner of Autopsy and they display a file or selected item in some way.
-- Report modules run at the end of the analysis and can generate various types of reports (or can do various types of analysis).
-- Data source processors allow for different types of data sources to be added to a case.
+
+# Finding A Module
+
+The modules in the repository are organized by their type.
+- **Ingest modules** analyze files as they are added to the case. This is the most common type of module.
+- **Content viewer modules** are in the lower right corner of Autopsy and they display a file or selected item in some way.
+- **Report modules** run at the end of the analysis and can generate various types of reports (or can do various types of analysis).
+- **Data source processors** allow for different types of data sources to be added to a case.
 Each module has its own folder with a README.md file that outlines the basics of what the module does.
-Instructions for installing a module can be found here: http://sleuthkit.org/autopsy/docs/user-docs/4.9.0/module_install_page.html
+You can either navigate the folder structure or use the [Search](https://sleuthkit.github.io/autopsy_addon_modules/) page, which searches the contents of the README files.
+
+
+# Downloading A Module
+
+Once you've found a module, you need to get it. You have two choices for doing that.
+
+1. Make a copy of this repository by downloading a ZIP file of it. You can do this by clicking on "Clone or download" and then "Download ZIP".
+![Download Image](images/download.png)
+
+2. You can download specific modules from the site. This is easier for Java NBM modules than it is for Python modules, which may contain a number of files.
+
+# Installing a Module
+
+Instructions for installing a module can be found here: http://sleuthkit.org/autopsy/docs/user-docs/latest/module_install_page.html
-NOTE: This replaces the wiki page that was here: http://wiki.sleuthkit.org/index.php?title=Autopsy_3rd_Party_Modules
+# Updating this Site
+If you are a developer and want your module listed here, please refer to the
+[Instructions for Developers](DocsForDevelopers/DeveloperInstructions.md).
-[Instructions for Developers](DocsForDevelopers/DeveloperInstructions.md)
diff --git a/ReportModules/ForensicExpertWitnessReport/README.md b/ReportModules/ForensicExpertWitnessReport/README.md
index 97fefcd..012495b 100644
--- a/ReportModules/ForensicExpertWitnessReport/README.md
+++ b/ReportModules/ForensicExpertWitnessReport/README.md
@@ -1,3 +1,5 @@
+- __Known Issue:__ This module causes Autopsy to lose its Tools menu (as of April 2020).
+
 - __Description:__ Adds tagged evidence into structured and styled tables automatically and directly inside a forensic expert witness report, whilst coming with three pre-existing forensic expert witness report templates to choose from.
 - __Author:__ Chris Wipat
 - __Minimum Autopsy version:__ : 3.0.7