Thursday, December 12, 2013

New MetaDiver beta is out!

  Well, a lot has been happening on the project (and otherwise) over the past few weeks, and I'm really excited that the new beta has been released!

Beta v1.0.8 - Release notes

  This release brings a lot of improvements. A few highlights: faster processing, PDF metadata extraction, better error handling, and a saner column order so the output doesn't look like a complete mess.

  The code is in place to handle PDFs (you read that right). It still needs more testing and is improving each day, but so far it seems pretty solid!





Go get it

Requirements: You'll need at least the .NET Framework 4.0 and Windows 7 or later to run it.

You can grab the latest version at easymetadata.com/Downloads.

Enjoy!

Notes

  I have started working on a website to house my various projects. You can check it out at www.easymetadata.com. It's just a simple site right now; something fancier will come later.

  On some systems Excel claims the xlsx output is corrupt, but the data seems fine. If you have problems, just use a different output format. There are also minor UI tweaks, an About screen, and a lot of plumbing you don't see.

  Still no install package. Microsoft did away with the simple Setup projects after VS2010 in favor of the WiX Toolset, which I am having to learn. It looks powerful, but it's more work (aka distraction) than I had planned on. If you are interested in learning more, I am planning to put together a post on WiX once I have a handle on it.


Feedback

*Please send or post feedback if you find this tool useful, have issues running it, or have feature requests! Post your feedback to Twitter with #MetaDiver.

Monday, December 9, 2013

MetaDiver update and next steps

  Let's talk about what's been happening since my first post about MetaDiver. I've been excited to see the views on the blog since that post, and it has me feeling there could be a lot of interest in this project.

Headaches with PDFs

  I've been putting a lot of testing and research into Shell32's PDF handling. Unfortunately, I found that Ad*b* Reader's pdfshell.dll is broken or unreliable on 64-bit Windows. I know it worked on my two computers until recently, so something has changed. I've tried every scenario I could find or think of to make the 32-bit dll work properly with Shell, to no avail... My takeaway is that we can't depend on Ad*b*'s dll with Shell32. To avoid the issue altogether I'm pulling the PDF metadata outside Shell. Below you can see the metadata I am able to pull so far. Not bad!
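  MetaDiver is a .NET app, so the sketch below is not its actual code; it just illustrates the general idea of reading PDF metadata directly instead of going through Shell32, using Perl's Image::ExifTool module (the file name is a placeholder):

#!/usr/bin/perl
# Illustration only: read PDF metadata directly instead of going through Shell32.
# Assumes the Image::ExifTool CPAN module is installed; the file name is a placeholder.
use strict;
use warnings;
use Image::ExifTool qw(ImageInfo);

my $pdf = shift || 'sample.pdf';

# ImageInfo returns a hash ref of tag => value pairs, including the PDF
# document information dictionary (Title, Author, Producer, CreateDate, ...).
my $info = ImageInfo($pdf);

foreach my $tag (sort keys %$info) {
    next if ref $info->{$tag};   # skip binary/structured values
    printf "%-25s %s\n", $tag, $info->{$tag};
}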


 

 For those who would like to see PDF details in Shell and in Explorer's Properties->Details, there is a solid product available. It's trial-ware with a $40 cost to purchase after 30 days. If you are cool with that then go that route; it works very well. It's called PDF-ShellTools.

Here's the project status so far!

 Going Forward

  I may provide only the Shell version for free, and a Pro version with all the extra work included as a 30-day time-limited beta. This is evolving and ongoing, so we will see. I want to make sure that the base version of MetaDiver is very functional, reliable, and adds value to many people and organizations while remaining simple and straightforward.

Here's where it's at

Working on Website and Wiki for MetaDiver.  <-- Domain and hosting package purchased
Working on Exception logging so I can give you a manifest of successes and failures.  <-- ETA is Soon
Working on Bug squashing and speed improvements. <-- ETA is ongoing.
Working on EXIF GPS parsing and decoding.
Requested - XML output.
Ability to specify a matter name for organizing information. <-- ETA is Soon

Other features that may or may not be part of a buy-up/donate version



Working on Testing PDF handling with a library outside Shell. <-- ETA is Soon
Database support via ODBC and/or SQLite/OLEDB database table
Other file formats.
Requested - Profiles allowing you to pull only the field columns you want.
Column reordering.
Requested - Window placement memory.

 

Wrapping up

  As you can see, a lot is going on. For those interested in a super duper MetaDiver version, stay tuned!

New Beta this Friday.

Keep coding

Tuesday, November 26, 2013

Introducing MetaDiver!

I'm pleased to share a program I have been working on for the past few months. MetaDiver extracts metadata from known file types such as Office docs and PDFs using the Windows Shell - really, anything Shell has a dll registered for, in theory... It spits out a nice spreadsheet or delimited document with the attributes you are interested in. With more features planned for future releases, I'm really excited about the simplicity and power of this product!

This isn't a forensic tool in the sense that it doesn't parse the file itself, but it's a great way to take metadata that is a pain to review in most forensic and IR tools and make it much easier to work with. It's also a great sanity check against your tools. You should be able to mount an image with your favorite tool and point this guy at the files you are interested in. However, I have only tested against data on my hard drives that had already been exported using my favorite forensic tools.

If it's more than 10-20 documents it could take a while to finish while it percolates, so be patient, grab a beer or coffee, and give it time to get through everything. It will tell you when it's done!
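MetaDiver itself is a .NET application, but if you're curious roughly what "asking the Shell" for a file's properties looks like, here is a minimal sketch using Perl's Win32::OLE and Shell.Application's GetDetailsOf. The folder, file name, and column range are just placeholders, not anything from MetaDiver's code:

#!/usr/bin/perl
# Rough illustration of the Shell property approach (not MetaDiver's actual code):
# ask Explorer's Shell.Application object for the detail columns it knows for a file.
use strict;
use warnings;
use Win32::OLE;

my ($dir, $file) = ('C:\\Docs', 'report.docx');   # placeholder folder and file

my $shell  = Win32::OLE->new('Shell.Application') or die "Can't create Shell.Application\n";
my $folder = $shell->Namespace($dir)              or die "Can't open folder $dir\n";
my $item   = $folder->ParseName($file)            or die "Can't find $file in $dir\n";

# Column indexes and names vary by Windows version, so just walk a generous range.
for my $col (0 .. 320) {
    my $name  = $folder->GetDetailsOf($folder->Items, $col);  # column header
    my $value = $folder->GetDetailsOf($item, $col);           # value for this file
    print "$name: $value\n" if $name && $value;
}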



Download v1.0.7 or click here.

Info: Right now it's just zipped up without an installer; I hope to add one soon. Just download, unzip, and keep the files in the same directory as the exe.

Requirements: Windows 7 or later and .NET 4.0.

Please send feedback if you like it, hate it, whatever.

Enjoy!
ddym@redrocktx.com

Sunday, November 17, 2013

Scripting with FTK Filters - Updated


This is an updated post about building Access Data's FTK (Forensic Toolkit) filters outside of FTK. Access Data probably won't like this, since a badly built filter can cause the client to crash. So let's build it with care.

If you are familiar with FTK then you have had to work with filters. You may even have broken a keyboard or two. One of the first things that came up when moving to FTK 2+ was how to make filters with a lot of items without getting "clickitis", as we call it. Building large filters by hand is tedious, time consuming, and prone to copy-and-paste errors. Out of sheer desperation I decided to write a script to automate building big filters so I could spend more time on analysis and less on copy-pasting. There might be a better way of getting this result without filters; if there is, please let me know!

The Script


Here is a quick and dirty example in Perl for creating a filter for FTK item numbers (don’t judge my syntax too harshly!). 

##<-Begin Script for FTK 5

#!/usr/bin/perl
use strict;
use warnings;

$| = 1;
my $path = shift or die "Usage: $0 item_numbers.txt > filter.xml\n";

# This filter will set a criterion of matching "any" item number in the list.
print "<?xml version=\"1.0\" encoding=\"UTF-8\"?>";
print "<exportedFilter xmlns=\"http://www.accessdata.com/ftk2/filters\"><filter name=\"Item #\" matchCriterion=\"any\" id=\"f_1000044\" read_only=\"false\" description=\"\">";

open(my $fh, "<", $path) or die "can't open $path: $!\n";
my @items = <$fh>;
close($fh);

my $i = 0;

foreach my $item_num (@items) {
    chomp($item_num);
    # Clean up trailing whitespace if it exists, otherwise FTK will freak since it expects an integer.
    $item_num =~ s/\s+$//;
    next if $item_num eq "";   # skip blank lines
    print "<rule position=\"" . $i . "\" enabled=\"true\" id=\"a_9000\" operator=\"is\"><one_int value=\"" . $item_num . "\"/></rule>";
    $i++;
}

print "</filter><attribute id=\"a_9000\" type=\"int\"><table>cmn_Objects</table><column>ObjectID</column></attribute></exportedFilter>";
##<-End Script
 
The way this works, you just feed in a text file with one item number per line. Make sure you strip out formatting and whitespace first; I attempt this in the script, but it's always best to feed in the cleanest data possible! Nice and simple, right?

Not so fast; one catch is that the table name, column name, and ids (cmn_Objects, ObjectID, a_9000, and f_1000044 in the script above) can change from FTK version to version. The best way to check whether any values have changed after an FTK upgrade is to build a new dummy filter for the item type you want, export it, then check the XML. Sometimes FTK gets temperamental and I can't explain why; if anyone has ideas I'd love the feedback. I've been successfully building these filters since FTK 2.x for item numbers, hashes, etc...
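Since a badly built filter can crash the client, it's also worth at least confirming the generated file is well-formed XML before importing it. Here's a minimal sketch, assuming you have the XML::LibXML module installed (this only checks basic XML syntax, not that FTK will accept the filter):

#!/usr/bin/perl
# Sanity check: confirm the generated filter is well-formed XML before importing.
# Note: this does NOT validate against FTK's filter schema, only basic XML syntax.
use strict;
use warnings;
use XML::LibXML;

my $file = shift or die "Usage: $0 filter.xml\n";

# load_xml dies with a parse error if the document is not well-formed.
eval { XML::LibXML->load_xml(location => $file) };
die "Not well-formed XML: $@" if $@;

print "$file looks well-formed.\n";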

Possible uses


File name matches
Quick hash matches
File path matching (the equivalent of a LIKE '%%' in SQL)
Item number matching back to the original item when you get lists of item numbers during discovery.
Etc...

Exporting to an XML File


If you want to send the script's output to an XML file, run it like this: "perl script.pl > filter.xml".
*Using > will send the console output to a file (overwriting any file with the same name that already exists).

Then you can just import the filter.xml into FTK! 

Wrap Up


There you have it: building FTK filters using Perl. I look forward to feedback based on your own experiences.

Hope you found this post interesting and useful!
-Dave