CT Stroke Data – what was I thinking?

So I’ve been working with some CT data (brain) after spending a few years in MRI.  I found this paper http://bit.ly/13idoBH that talks about the problems of acquiring CT data and the DICOM (Digital Imaging and Communications in Medicine) format.  I’ll discuss some of the things I’ve encountered trying to get these images into formats that my MRI-based tools can understand.

For those who don’t know about image acquisition, imagine you’ve got a block of cheese and you’re at the deli.  The slicer is the CT scanner, the cheese – your brain.  Each of the slices is one DICOM image.  Nice cheesy DICOMs.  I want a sandwich.

To those who do imaging, here are a few things I think:

1) MRI folks are spoiled.  MRI scans are usually acquired with well-specified, well-documented parameters, and there are a lot of tools to work with them.  I didn’t know how good I had it for a while.

2) Variable slice thickness should be banned.  This refers to taking 5mm slices at the bottom of the brain and then changing to 10mm later in the scan (and maybe changing back!).  Stop this madness.  That’s like asking the deli guy (or gal) “Hey, I want a 1/4 pound of thick slices, then 1/2 pound of thin, and then a 1/4 pound of thick on top of that”.  Now I understand that viewing things in finer detail is necessary/desirable, and that CT delivers radiation, so shorter scanning times/fewer slices are better.

That being said – a lot of patients (and CT is usually a diagnostic population – so yes, patients) are getting upwards of 10 scans.  You’re dosing them.  I’m not glad about it, but if you’re going to do it and you need that finer resolution, just set the whole scan to it.  I don’t need a fine section of their nose, but stop playing with the knobs and giving me a weird block of cheese.
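
If you want to check whether you got a weird block of cheese, the slice thickness is sitting right in the headers.  Here’s a minimal sketch in R, assuming the oro.dicom package (my pick, not something from the paper) and a made-up folder of slices:

library(oro.dicom)  # CRAN package for reading DICOM files in R

dcm <- readDICOM("path/to/ct_series")  # hypothetical folder of one scan's slices
thk <- extractHeader(dcm$hdr, "SliceThickness", numeric = TRUE)
table(thk)  # more than one value means someone was playing with the knobs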

Now what to do?

“Duplicate the slices so that you fill in the gaps and have the finer resolution” – have you seen what that looks like in 3 dimensions?  No – you haven’t, because it doesn’t look correct.

“Interpolate the data and make it a uniform thickness” – I’ll just average your eye and frontal cortex – looks nice.  This isn’t a bad idea (there’s a toy sketch of it below, after this list) – but I think staying closer to the raw data is better.

“Well just take out the duplicate slices” – throwing out data is not what I’m into. Not the worst option though.

“Don’t use that scan” – get out, no seriously, out.
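
For the record, the interpolation option is easy enough to sketch.  A toy version in R, assuming a 3D array img (x by y by z) and the z-position of each slice (say, pulled from the ImagePositionPatient header field); real tools are smarter about this:

# resample an (x, y, z) array to uniform slice spacing along z
resample_z <- function(img, z_pos, new_dz = 5) {
  new_z <- seq(min(z_pos), max(z_pos), by = new_dz)
  out <- array(0, dim = c(dim(img)[1:2], length(new_z)))
  for (i in seq_len(dim(img)[1])) {
    for (j in seq_len(dim(img)[2])) {
      # linear interpolation of each voxel column - this is where your
      # eye and frontal cortex get averaged at the slice boundaries
      out[i, j, ] <- approx(z_pos, img[i, j, ], xout = new_z)$y
    }
  }
  out
}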

3) Gantry tilt.  I didn’t know I was supposed to hate gantry tilt, partially because I didn’t know what it was.  The paper above describes that they do this to help shield vital organs (like your eyes – not your brain) from radiation.  Essentially, the X-ray beam is angled so the slices aren’t perpendicular/orthogonal to your head.

Now if you’re saying they just acquire the scan at  “a funny angle”, I’d say to you “It’s behind you Tyrone. Whenever you reverse, things come from behind you.” http://www.imdb.com/title/tt0208092/quotes.  

Anyway – a scan that was acquired with a bad gantry tilt pretty much looks like one of the Coneheads walked into the scanner.  The sagittal view of an axially acquired scan looks like you stretched the posterior of my head with a plunger.

Ok – what to do?  There are some simple transforms to cram your head back to normal: http://www.mathworks.com/matlabcentral/fileexchange/28141-gantry-detector-tilt-correction.  Thank you for such scripts.

Are they perfect?  What a terrible leading question – of course not.  Images are acquired in a FOV (field of view), and if you rotate the head, you may push some data (like your frontal cortex) out of it.  There is some shifting you can do to try to combat this.
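
If you’re wondering what such a correction boils down to, it’s essentially a per-slice shear: each axial slice gets shifted along y in proportion to its z-position.  A crude toy version in R (mine, not the MATLAB script above), assuming square pixels and a tilt angle read from the headers; sign conventions vary by scanner:

# undo gantry tilt by shifting each slice along y proportionally to z
fix_tilt <- function(img, tilt_deg, slice_mm, pixel_mm) {
  out <- array(min(img), dim = dim(img))  # fill the background with "air"
  ny <- dim(img)[2]
  for (z in seq_len(dim(img)[3])) {
    # how far (in voxels) this slice was sheared during acquisition
    shift <- round((z - 1) * slice_mm * tan(tilt_deg * pi / 180) / pixel_mm)
    src <- seq_len(ny) - shift
    ok <- src >= 1 & src <= ny  # anything shifted outside the FOV is lost
    out[, which(ok), z] <- img[, src[ok], z]
  }
  out
}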

4) DICOM headers are a plethora of information…and not de-identified enough.  They are essentially “metadata” (https://en.wikipedia.org/wiki/Metadata), or information about how the data was acquired.  DICOM headers carry vital information, such as the actual degree of gantry tilt described above, the date/time the scan was acquired, and what type of scan it is.  They have information specific to each slice (of brain/cheese), which can be read with some well-established tools (DCMTK is good: http://dicom.offis.de/dcmtk.php.en).
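
In R, pulling out those fields looks something like this (again assuming oro.dicom; the exact field-name spellings are my best guesses at the dictionary entries):

library(oro.dicom)

dcm <- readDICOM("path/to/ct_series")  # hypothetical path, as before
extractHeader(dcm$hdr, "GantryDetectorTilt", numeric = TRUE)   # tag (0018,1120)
extractHeader(dcm$hdr, "AcquisitionDate", numeric = FALSE)
extractHeader(dcm$hdr, "SeriesDescription", numeric = FALSE)   # what type of scan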

But with all that information, different hospitals like to cram a bunch of identifiable markers in there, and not always in standard places.  Patients’ names, doctors’ names, birthdays, OTHER patients’ names, people’s phone numbers, their favorite colors, hobbies, etc.  This can become troublesome for hungry hungry HIPAA.
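
Since the identifiers don’t sit in standard places, one blunt check is to grep every header field of every slice for strings that shouldn’t be there.  A hedged sketch (the patterns below are made up):

library(oro.dicom)

dcm <- readDICOM("path/to/ct_series")
phi <- c("DOE\\^JANE", "410-555", "01/01/1950")  # fake name/phone/birthday patterns
bad <- paste(phi, collapse = "|")
hits <- lapply(dcm$hdr, function(h) {
  h[grepl(bad, h$value, ignore.case = TRUE), c("name", "value")]
})
hits[sapply(hits, nrow) > 0]  # only the slices with suspicious fields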

5) There is a light at the end of the tunnel.  The data is rich, very common, and it’s cheaper to acquire than MRI.  “Hey, I went to the ICU” – CT scan; car accident – CT scan; brother hits you with a rock – CT scan.


Hopefully I’ll have another post about how we can hurdle these challenges.  Then maybe that sandwich.


Shiny App for looking at Models

How many times do you hear “That model looks good, but what happens if you add/take out this variable”?  I’ve heard it one too many times and I finally have tools to combat this problem.

Introducing my first “out there” Shiny App:

https://github.com/muschellij2/Shiny_model

The app allows you to toggle a set of predictors on/off, select from a list of outcomes, and presents the GLM of that combination (hopefully with correct interpretation of the estimates).  If you want more families, it shouldn’t be hard to add them.  It also shows the generalized added-variable plots from the `car` package, so you can check to your heart’s desire for non-linearity in your predictors.
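
Under the hood it’s nothing fancy.  A minimal sketch of the idea (the data and variable names here are made up, not the app’s mock data):

library(car)  # for avPlots()

set.seed(1)  # fake data standing in for the app's data set
df <- data.frame(
  outcome = rbinom(100, 1, 0.5),
  age = rnorm(100, 60, 10),
  volume = rexp(100, 1/20)
)

predictors <- c("age", "volume")  # in the app, these are the checkboxes
fit <- glm(reformulate(predictors, response = "outcome"),
           data = df, family = binomial())
summary(fit)  # the model output the app displays
avPlots(fit)  # added-variable plots for eyeballing non-linearity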

Steps to use (in terminal):

git clone https://github.com/muschellij2/Shiny_model.git

(or just download server.R and ui.R)

Then, in R:

setwd("DIRECTORY those files are in")
require(shiny)
runApp()

That’s it!  I’ve loaded up a mock data set in there that mimics what I was working on, so rest assured I didn’t add any real data.  Let me know if you like it (I’m not adding more features at this time – it’s just a work in progress).  If you want to learn more, check out http://www.rstudio.com/shiny/ and their great tutorials.