Tissue Specimens and Hyperspectral Imaging for Colorectal Cancer Detection: A Deep Dive into a New Framework

Hold onto your hats, folks, because we’re about to dive into the fascinating world where cutting-edge tech meets good ol’ fashioned medicine. That’s right, we’re talkin’ about using hyperspectral imaging and, get this, deep learning to detect colorectal cancer. It’s like something straight outta a sci-fi movie, but trust me, this is the real deal.

The What and the Why

This study’s on a mission, fam. The goal? To create a super sophisticated deep learning framework that can spot colorectal cancer using hyperspectral images of, you guessed it, tissue specimens. And here’s the kicker: this framework’s rocking a brand-new 2D-3D methodology. This bad boy combines the best of both the 2D and 3D deep learning worlds, making it seriously scalable and giving us the power to balance speed and accuracy like a pro.

Getting Down to the Nitty-Gritty: Materials and Methods

Okay, so you know how scientists are all about the details? Well, buckle up, buttercup, because here’s where things get technical.

Ethical Considerations and Tissue Acquisition

First things first, ethics. Before we even think about poking around tissues, we gotta make sure everything’s on the up and up. This study got the green light from the Institutional Review Board (IRB) at the University of South Alabama. That means they’re playing by the rules.

Now, about those tissues. This study’s using leftover, de-identified tissue specimens from standard surgical procedures. Don’t worry, they’re not snatching tissues all willy-nilly. And since the specimens are de-identified, patient privacy is A-okay. Plus, the IRB said it’s cool to use these leftover specimens without getting extra consent. Ethics? Check!

Tissue Processing

So, we’ve got our ethically-sourced tissues, now what? Time to get them prepped for their close-up!

Here’s the play-by-play:

  • Step one: Grab those fresh specimens straight from the operating room and send them over to surgical pathology. You know, just to make sure everything looks good under the microscope.
  • Step two: Separate the tumor tissue from the normal tissue. We gotta keep ’em separated like Destiny’s Child, you feel me? Each type gets its own little container filled with PBS (that’s phosphate-buffered saline, for all you non-science nerds). Then, it’s off to a nice, chilly bath at four degrees Celsius.
  • Step three: Give those tissues a quick rinse with more PBS. Next, it’s time to break out the tiny knives and dice those specimens into even tinier cubes. Finally, carefully mount those little guys onto coverslips, ready for their big debut under the hyperspectral microscope.

Hyperspectral Imaging System

Alright, y’all, this is where the real magic happens. We’re talkin’ about a custom-built, excitation-scanning hyperspectral imaging microscope platform. Yeah, it’s a mouthful, but trust me, it’s pretty darn cool.

Picture this:

  • An inverted widefield microscope base (Nikon Eclipse TE -U) – think of it as the stage for our tiny tissue stars.
  • A x objective lens (Nikon Plan Apo λ ∞/. MRD) – because bigger is always better, right? Well, in microscopy, it’s more about magnification and clarity.
  • Not one, but two cameras! A back-illuminated EMCCD camera (Q-Imaging Rolera em-c) and a back-illuminated sCMOS camera (Teledyne Photometrics Prime B) for some extra special shots. These bad boys capture all the juicy details.
  • A broadband Xenon arc lamp (Sunoptics Titan ) – gotta have some fancy lighting for our microscopic photoshoot.
  • A tiltable filter wheel (Sutter Instruments VF-) with tunable filters (Semrock VersaChrome) – this is where we get to play with colors and wavelengths, capturing the full spectrum of tissue awesomeness.
  • A liquid light guide – because even light needs a little guidance sometimes.
  • And last but not least, a dichroic beamsplitter and emission filter (Semrock) – these guys help us separate the good light from the bad light, ensuring crystal-clear images.

So there you have it, folks: the hardware side of the story. Next up, we’ll delve into the nitty-gritty of image acquisition, data wrangling, and the star of the show – the 2D-3D deep learning framework itself!

Image Acquisition and Preprocessing

Time to get our hands dirty (not literally, of course) with some image acquisition! Our hyperspectral microscope, that beautiful Frankensteinian creation we just discussed, captures images across a range of wavelengths, from a cool 360 nanometers to a smooth 550 nanometers, in 5-nanometer increments. That’s 38 different wavelength bands, my friends, each revealing a different aspect of our tiny tissue samples.

For each of these wavelength bands, the filter wheel spins like a roulette wheel of science, selecting the perfect wavelength of light to shine on our sample. The cameras, those digital paparazzi, snap away, capturing the subtle variations in the light coming back from the tissue at each excitation wavelength.
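
For the code-curious among you, here is roughly what that excitation scan boils down to, written as a minimal Python sketch. Fair warning: filter_wheel and camera are hypothetical stand-ins for the real instrument drivers, so read this as the gist of the loop, not the actual control software.

    import numpy as np

    # Excitation wavelengths stepped through by the tunable filter wheel.
    wavelengths_nm = np.arange(360, 551, 5)

    def acquire_cube(filter_wheel, camera):
        frames = []
        for wl in wavelengths_nm:
            filter_wheel.set_wavelength(wl)   # tune the filter (hypothetical driver call)
            frames.append(camera.snap())      # grab one frame at this excitation (hypothetical)
        return np.stack(frames, axis=-1)      # (rows, cols, bands) hyperspectral cube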

Here’s the lowdown on our camera settings:

  • EMCCD: 14-bit, 2×2 binning, resulting in images with a resolution of 501×502 pixels.
  • sCMOS: 16-bit, 2×2 binning, capturing a slightly larger field of view with 600×600 pixels. This camera was used for a smaller subset of images.

We’re not talking about just one or two snapshots, though. Oh no, we’re talking multiple fields of view (FOV) per specimen. Think of it like taking panoramic pictures of a city, but instead of a city, it’s a tiny piece of tissue, and instead of a camera, it’s a super high-tech microscope. You get the picture.

Now, before we unleash the power of deep learning on this treasure trove of images, we need to do a little tidying up. This is where image preprocessing swoops in to save the day. First, we use a spectrometer, an integrating sphere, and a calibration lamp to correct for any variations in spectral response. Basically, we’re making sure all the colors are true to life. Next, we apply some computational magic, including:

  • Linear compression to resize all those massive image volumes down to a more manageable 500×500×38 voxels. Think of it like shrinking a photo down to a smaller size, except it’s a whole stack of images of cells.
  • Histogram equalization and normalization. This step ensures that the brightness and contrast are consistent across all images, making it easier for our deep learning algorithms to do their thing. We’re talking a range of 0–1, stored as 32-bit floating point numbers. Don’t worry too much about the technical jargon; just know that it’s important. (There’s a quick code sketch of this whole preprocessing routine right after this list.)
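
For the tinkerers out there, here is a minimal sketch of that preprocessing routine using NumPy and scikit-image. One assumption on my part: the calibration data gets applied as a simple per-band gain factor (the correction array below), so treat this as a back-of-the-napkin version rather than the study’s exact code.

    import numpy as np
    from skimage import exposure, transform

    def preprocess(cube, correction):
        # Spectral-response correction: one gain factor per wavelength band,
        # derived from the calibration lamp / integrating sphere measurements.
        cube = cube * correction[np.newaxis, np.newaxis, :]
        # Linear (order-1) resampling down to a uniform 500 x 500 x 38 volume.
        cube = transform.resize(cube, (500, 500, 38), order=1, anti_aliasing=True)
        # Histogram equalization, which also rescales intensities into [0, 1].
        cube = exposure.equalize_hist(cube)
        return cube.astype(np.float32)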

With our images all prepped and ready to go, we now have a dataset of 104 lesional (that’s the fancy word for tumor-containing) and 112 non-lesional FOVs. It’s time to unleash the algorithms!

2D-3D Deep Learning Framework: Where the Magic Really Happens

Alright, folks, gather ’round because this is where things get really interesting. We’ve got our preprocessed images, and now it’s time to feed them to the hungry beast that is our 2D-3D deep learning framework. This is where the real magic of this study lies.

But first, a little data augmentation. You see, deep learning algorithms are like picky eaters. They need a lot of data to learn effectively. So, to give our algorithms a little more to chew on, we use spatial rotations. We’re talking 90 degrees plus or minus 45 degrees, 180 degrees plus or minus 45 degrees, you get the idea. We’re basically showing the algorithm the same image from different angles, like a microscopic kaleidoscope. We use linear interpolation to fill in any gaps, because we’re all about precision here.
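
Here is a minimal sketch of that rotation trick using SciPy. One caveat: the exact angle set below (45, 90, 135, 180, and 225 degrees) is just one reading of the "90 plus or minus 45, 180 plus or minus 45" recipe, so consider it illustrative rather than gospel.

    import numpy as np
    from scipy.ndimage import rotate

    def augment(cube, angles=(45, 90, 135, 180, 225)):
        # Rotate in the spatial plane only (axes 0 and 1), leaving the spectral
        # axis untouched; order=1 gives the linear interpolation mentioned above.
        return [rotate(cube, angle, axes=(0, 1), order=1, reshape=False)
                for angle in angles]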

Next up, we extract regions of interest (ROIs), because who wants to look at an entire image when you can focus on the juicy bits? We experiment with different ROI sizes, from a tiny 50×50 pixels to a whopping 250×250 pixels. We randomly place these ROIs within the image volume to keep things interesting and avoid bias. And to make sure we’re not drowning in redundant data, we adjust the number of ROIs based on the size. It’s all about finding that sweet spot between too much information and not enough.
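
If you want to see the ROI idea in code, here is a bare-bones sketch. The function name and the uniform random placement are my own illustrative choices; the study simply says ROIs are placed randomly within the image volume, with fewer ROIs used as the ROI size grows.

    import numpy as np

    def extract_rois(cube, roi_size, n_rois, seed=None):
        # Randomly place square ROIs of roi_size x roi_size pixels,
        # keeping the full spectral depth for each one.
        rng = np.random.default_rng(seed)
        rows, cols, _ = cube.shape
        rois = []
        for _ in range(n_rois):
            r = int(rng.integers(0, rows - roi_size + 1))
            c = int(rng.integers(0, cols - roi_size + 1))
            rois.append(cube[r:r + roi_size, c:c + roi_size, :])
        return rois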

Dimensionality Reduction (PCA): Simplifying the Complex

Remember those 38 wavelength bands we talked about earlier? Yeah, that’s a lot of data to process, even for our super-powered deep learning algorithms. So, to make things a little easier, we use a technique called Principal Component Analysis (PCA). This bad boy takes all those spectral bands and squishes them down into a smaller number of principal components (PCs), kind of like squeezing a bunch of grapes into a single glass of juice. We evaluate how much variance each PC explains to figure out the optimal number to keep. Too few, and we lose important information. Too many, and we’re back to square one with the data overload.
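
Here is a minimal PCA sketch with scikit-learn. The assumption baked in: every pixel’s 38-band spectrum is treated as one sample, so the cube gets flattened into a (pixels x bands) matrix before fitting. The explained-variance printout is exactly the kind of check used to decide how many PCs are worth keeping.

    import numpy as np
    from sklearn.decomposition import PCA

    def reduce_bands(cube, n_components):
        rows, cols, bands = cube.shape
        flat = cube.reshape(-1, bands)               # one spectrum per pixel
        pca = PCA(n_components=n_components)
        scores = pca.fit_transform(flat)             # (n_pixels, n_components)
        kept = pca.explained_variance_ratio_.sum()   # fraction of variance retained
        print(f"{n_components} PCs keep {kept:.1%} of the spectral variance")
        return scores.reshape(rows, cols, n_components)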

We put our convolutional neural networks (CNNs) to the test, comparing their performance with and without PCA. We experiment with 3, 8, 16, 32 PCs, and even all 38 bands, just to be thorough. And of course, we adjust the architecture of our neural networks based on the image size and the number of PCs we decide to keep. It’s all about finding that perfect balance between simplicity and accuracy.

Deep Neural Network Models: The Brains of the Operation

Now, for the main event! We’re talking about the deep neural networks (DNNs), the brains of our operation. We’re putting four different architectures to the test, two existing and two newly converted using our innovative 2D-3D methodology:

  • 3D-CihanNet: This bad boy is already designed for 3D data, so it’s ready to rumble right out of the gate.
  • 2D-ResNet50: Another heavy hitter, but this one’s used to dealing with 2D images. No worries, we’ve got a plan for that.
  • 2D-CihanNet: We took the original 3D CihanNet and used our 2D-3D methodology to build its 2D counterpart, because why not?
  • 3D-ResNet50: Same deal with ResNet50. We converted this 2D champ into a 3D powerhouse using our special sauce.

Converting 2D models to 3D is no walk in the park, but our 2D-3D methodology makes it look easy. We carefully adjust the convolutional layers to handle the extra dimension, tweaking kernel sizes along the way. And when it comes to max-pooling layers, we’re not afraid to omit them if they would squeeze a data dimension down into negative (read: impossible) territory. We’re all about keeping things positive and informative.
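
To make that conversion idea concrete, here is a toy PyTorch sketch. It is a single made-up block, nowhere near the full CihanNet or ResNet50 conversions, but it shows the two moves described above: swapping Conv2d for Conv3d with an extra kernel dimension, and dropping a pooling layer when the spectral axis is too thin to survive another halving.

    import torch
    import torch.nn as nn

    # Original 2D building block: works on (batch, channels, height, width).
    block_2d = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=2),
    )

    # 2D-3D conversion: the convolution grows an extra kernel dimension for the
    # spectral axis, and pooling is kept only while the spectral depth can
    # still be halved.
    spectral_depth = 8  # e.g. the number of principal components kept
    block_3d = nn.Sequential(
        nn.Conv3d(1, 16, kernel_size=(3, 3, 3), padding=1),
        nn.ReLU(),
        nn.MaxPool3d(kernel_size=2) if spectral_depth >= 2 else nn.Identity(),
    )

    x = torch.randn(4, 1, spectral_depth, 50, 50)  # (batch, ch, bands, rows, cols)
    print(block_3d(x).shape)                       # torch.Size([4, 16, 4, 25, 25])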