
UltraSonic:

Ultrasound Skin Deformation Diagnoses

Eliminating Uncertainty Under the Surface

Project Description

The motivation for this code is to give clinicians a quantitative method for determining the progression of skin diseases and their treatments, as no such quantitative method is currently available.

01

Plot

The first step in this process is having the user select a region of interest in the ultrasound video to track. The program then takes this region of interest and turns it into a matrix of points which the program will track.
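As a minimal sketch of this step (the ROI tuple, grid size, and function name below are illustrative, not values from the program; the actual selection is interactive), a rectangular region can be turned into a matrix of trackable points like so:

```python
import numpy as np

def roi_to_points(roi, rows, cols):
    """Turn a rectangular ROI (x, y, w, h) into a rows x cols matrix of
    (x, y) points in the format OpenCV's tracking functions expect."""
    x, y, w, h = roi
    xs = np.linspace(x, x + w, cols)
    ys = np.linspace(y, y + h, rows)
    grid_x, grid_y = np.meshgrid(xs, ys)
    # OpenCV optical-flow functions take float32 points of shape (N, 1, 2)
    pts = np.stack([grid_x.ravel(), grid_y.ravel()], axis=1)
    return pts.astype(np.float32).reshape(-1, 1, 2)

# Hypothetical ROI at (100, 50), 80 px wide and 40 px tall
points = roi_to_points((100, 50, 80, 40), rows=3, cols=4)
print(points.shape)  # (12, 1, 2)
```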

02

Track

The next step is to track the motion of the plotted points as the skin moves due to a known applied force from one frame to the next in the video. This is done using functions in OpenCV, a library of functions designed for real time computer vision.

03

Calculate

The final step is to calculate meaningful data that can be used by the clinician such as strain. This is done by taking the tracked distance that the points moved and using it to calculate strain, then plotting the strain field to locate areas of high and low strain.
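The strain calculation itself can be sketched as follows, assuming engineering strain between pairs of tracked points (the coordinates below are made up for illustration):

```python
import numpy as np

def engineering_strain(p0_start, p1_start, p0_end, p1_end):
    """Strain along the line between two tracked points:
    (deformed length - initial length) / initial length."""
    l0 = np.linalg.norm(np.subtract(p1_start, p0_start))
    l = np.linalg.norm(np.subtract(p1_end, p0_end))
    return (l - l0) / l0

# Two points initially 50 px apart stretch to 55 px apart: 10% tensile strain
strain = engineering_strain((0, 0), (50, 0), (0, 0), (55, 0))
print(strain)  # 0.1
```

Repeating this over every pair of neighboring tracked points produces the strain field described above.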


Meet UltraSonic

Junior_Roster_Photo.jpg
Jake Catalano

Jake is a senior Mechanical Engineering student. He is also pursuing his Master's Degree in Mechanical Engineering with concentrations in medical devices and manufacturing. He is the captain of the Stevens cross country and track & field teams. In addition, he is a member of the Pi Tau Sigma honor society.

IMG-5075.jpg
Mark Anthony Demetillo

Mark Anthony is currently a senior Mechanical Engineering student and is planning to obtain his Master of Engineering in Mechanical Engineering with a concentration in Pharmaceutical Manufacturing. In addition to being a Pinnacle Scholar and a Society of American Military Engineers (SAME) Scholar, Mark Anthony is a proud member of the Pi Tau Sigma and Tau Beta Pi honor societies. His previous work experience includes automating graphene exfoliation and generating anatomically accurate brain and brain-structure models to investigate cerebral atrophy.

0.jfif
Jack Moss

Jack is currently a senior (4/4) Mechanical Engineering student pursuing a career in the medical device industry. He is also the Vice President of the New Jersey Alpha chapter of the Tau Beta Pi engineering honor society and a brother of the Pi Tau Sigma mechanical engineering honor society.

IMG_1027.jpg
Richard Rossbach

Richard is a senior Mechanical Engineering student. He is also pursuing a Master's Degree in Mechanical Engineering with a concentration in medical devices. He is a member of the Tau Beta Pi Honor Society and is a Pinnacle Scholar.

3.png
Chenxin Xu

Chenxin is a senior Mechanical Engineering student. She studied abroad at the University of New South Wales in Sydney, Australia, and will continue on to a Master's Degree after graduation.

jojo.jpg
Johannes Weickenmeier, PhD

Prof. Johannes Weickenmeier is the advisor for this project. He completed his PhD at the Swiss Federal Institute of Technology in Zurich in 2015 and worked as a postdoc in the group of Prof. Ellen Kuhl at Stanford University. His research includes the experimental and computational characterization of soft tissues, with a specific interest in the skin and brain. His current work focuses on our fundamental understanding of mechanobiological properties and mechanisms in the healthy and aging brain, as well as coupled multi-field formulations for the spread of neurodegenerative diseases, such as Alzheimer’s disease and chronic traumatic encephalopathy.


Concept Design and Selection

Concept 1: Tracking Set Key Points

Concept1a.png
Concept1b.png

User-specified points or regions of interest are tracked and deformation of each point is calculated, reporting changes through color variation.

Concept 2: Algorithm-Generated Key Point Tracking

Concept2.png

Computer vision is used to automatically generate points to track based on characteristics of the video being analyzed.

Concept 3: Dense Optical Flow

Concept3.png

Motion of every pixel in the frame of the video is tracked and vector fields representing movement are drawn.

Concept 4: Color Hue Tracking

Concept4a.png

Moving points on the ultrasound are colored based on flow towards or away from a transducer.

Concept 5: Electrical Signal Measurements Via Electrode Pads

Concept5.png

Current measurements from applied voltages are taken on deformed and undeformed skin, and signals can be generated on the ultrasound to report strain.

Concept Selection

Concept Selection.png
Technical Analysis and Design


Pseudo Code for One Point Tracking

Mark's Pseudo Code.png

Pseudo Code for ROI Tracking

Screen Shot 2019-12-09 at 12.58.26 AM.pn

As shown in the figures above, the pseudo code lays out the plan of action to be executed in layman's terms. Indentation indicates functions to be executed within loops (while loops, for loops, etc.).

Strain

strain.PNG

From the deformation output by the software, strain fields can be generated as long as the initial length of the measured area is known. From the strain fields, doctors will be able to draw conclusions about the skin's health.
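As a sketch of how a strain field can be derived from tracked grid displacements (the displacement field and grid spacing below are synthetic, and np.gradient is one of several ways to approximate the derivatives):

```python
import numpy as np

# Hypothetical displacements for a 5 x 5 grid of tracked points spaced
# 10 px apart: horizontal displacement grows linearly with x, which
# corresponds to a uniform 2% normal strain in the x direction
spacing = 10.0
x = np.arange(5) * spacing
y = np.arange(5) * spacing
X, Y = np.meshgrid(x, y)
u_x = 0.02 * X          # horizontal displacement of each tracked point
u_y = np.zeros_like(X)  # no vertical displacement

# Small-strain normal components: eps_xx = du_x/dx, eps_yy = du_y/dy
eps_xx = np.gradient(u_x, spacing, axis=1)
eps_yy = np.gradient(u_y, spacing, axis=0)
# eps_xx is 0.02 at every grid point; a heat map of such a field is
# what locates the areas of high and low strain for the clinician
print(eps_xx.mean(), eps_yy.mean())
```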

Control Flow/Block Diagram

SDBlockDiagram.PNG

Block diagrams are a great way to show how a program flows through its operations. UltraSonic's code begins by uploading a video or image file and then allowing a physician to select the area to be analyzed. The user then chooses how many points are to be tracked, and those points are generated from the selected region. Next, the code tracks the points frame by frame, and a vector field showing the deformation and progress of the points is created. Operators are then given the option to review their results and decide whether to continue with the calculations, plot new points, or redo the analysis with more points for greater detail. Finally, the code plots the deformation, reads out important stress and strain values, and gives the physician its determination of whether the skin is healthy.

Program Time Complexity

The time complexity of the program can be described using Big-O notation, a measure of how much time or space an algorithm requires as its input grows. As our program is based on the Lucas-Kanade algorithm for optical flow, its complexity should be nearly identical. This complexity is well documented and can be seen below; the steps refer to each step of the Lucas-Kanade algorithm. The total Big-O expression for each iteration is O(n^(2)*N+n^3), where n is the number of warp parameters and N is the number of pixels. This means the runtime grows cubically with the number of warp parameters and linearly with the number of pixels. However, N is characteristically much larger than n (N>>n), so the time complexity simplifies down to O(n^(2)*N).
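The N>>n simplification can be checked numerically; the parameter values below are illustrative only (an affine warp has 6 parameters, and 640x480 is a typical frame size):

```python
# Per-iteration cost model for the Lucas-Kanade algorithm, O(n^2 * N + n^3),
# where n is the number of warp parameters and N the number of pixels
def lk_ops(n, N):
    return n**2 * N + n**3

n = 6            # illustrative: an affine warp has 6 parameters
N = 640 * 480    # illustrative frame size in pixels

full = lk_ops(n, N)
dominant = n**2 * N     # the O(n^2 * N) term alone
print(dominant / full)  # very close to 1: the n^3 term is negligible when N >> n
```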

big_O LK.PNG
Big O.png
Project Management


UltraSonic strives to provide a quality product for its customers. To do this, a strict schedule and budget are followed to ensure the best possible software.

Gantt1SD.png
Updated Gantt.png

Scheduling helps us at UltraSonic stay on track for the launch of our new software. Milestones are set to help the team stay focused and determined in meeting our deadlines. The three main milestones are deadlines for plotting the key points to be tracked (November 11th), tracking the key points in time for the alpha prototype (November 29th), and calculating the strain from the data (March 31st). The completed and refined code will then be showcased at the Stevens Innovation Expo in May of 2020. Several other update reports and presentations are also worked into our schedule to keep the public informed, and updates will be posted on the "Progress and Updates" page of our website. The images above show the Phase 1-3 Gantt chart (left) and the Phase 4-6 Gantt chart (right), updated due to the COVID-19 pandemic.

Updated Budget.png

Keeping a budget helps our team monitor its spending and remain on track to complete our product; failing to follow the budget can result in delays and unforeseen issues not accounted for in our schedule. The team has remained within its $700 budget, as no purchases needed to be made due to the software-based nature of the project. The budget, including the bill of materials, can be seen to the left.

Progress and Updates


Update 1: Point Tracking on Video File

UltraSonic has converted the pseudo code for one-point tracking into working code in the Python programming language. A video demonstration can be found to the left, where it was run on a sample ultrasound. This ultrasound video shows the application of stress to the skin, which will be the main basis for determining the mechanical properties of the skin.

Update 2: Point Tracking with Webcam

By slightly modifying the code from Update 1, the webcam of a computer can be used to track anything on screen instead of an uploaded video or image. Selecting the tip of a team member's finger, the movement of the finger could be followed. This serves as a proof of concept that the code tracks the correct point: it successfully follows the finger throughout the duration of the run.

Update 3: ROI Tracking on Video File

Instead of tracking the single point shown in Update 1, this code selects a region of interest in a rectangular shape. The module allows several types of trackers; the video shown to the left demonstrates how KCF (Kernelized Correlation Filters), one of these tracker types, works. When running the program, the user chooses which frame to start from and then selects the region of interest. The tracking result is shown in the window "track result". After the code runs, the coordinates of the center of the region are collected and saved to a text file.
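The tracker API itself varies across OpenCV versions, so the sketch below covers only the bookkeeping step this update describes: converting each frame's tracked bounding box to a center point and saving it to a text file. The box values are made up; in the real program they would come from the tracker inside the video loop.

```python
def box_center(box):
    """Center of an (x, y, w, h) bounding box as returned by OpenCV trackers."""
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

# Hypothetical per-frame boxes from a KCF-style tracker
boxes = [(100, 50, 40, 20), (103, 52, 40, 20), (107, 55, 40, 20)]

# Write one "frame_index center_x center_y" line per frame
with open("roi_centers.txt", "w") as f:
    for frame_idx, box in enumerate(boxes):
        cx, cy = box_center(box)
        f.write(f"{frame_idx} {cx} {cy}\n")

print(open("roi_centers.txt").read().splitlines()[0])  # 0 120.0 60.0
```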

Update 4: ROI+Lucas-Kanade Tracking 

comparison.png

In this model, the user selects a region of interest (ROI), and the program then finds the most prominent corners within it using a feature-detection function in OpenCV. After the ROI and its prominent points are determined, the program tracks the ROI with a KCF (Kernelized Correlation Filter) tracker and with the Lucas-Kanade optical flow method; the ROI tracking and Lucas-Kanade tracking work as described above. What is new in this model is a function that selects points to track automatically instead of manually. Since multiple manual selections may confuse users, the system automatically determines the most prominent points once the user selects a region. To realize this, the program uses the function cv2.goodFeaturesToTrack, which has several parameters governing the selection criteria. The parameter maxCorners limits the maximum number of points selected in the region; qualityLevel characterizes the minimal accepted quality of an image corner; minDistance sets the minimum possible distance between returned corners; the mask parameter restricts selection to the ROI; and blockSize defines the size of the averaging block used to compute the corner measure over each pixel neighborhood. The function also supports an alternative detector, the Harris detector, which is enabled by setting the useHarrisDetector parameter and the related k value.

 

The image on the right shows an example of using this model to track skin deformation. In this instance, six points are tracked, four of which are connected to demonstrate the deformation of the skin. As shown above, the quadrilateral without points is the original shape of a particular region of the skin, while the quadrilateral with points shows the result after pulling the skin. Comparing the shapes before and after pulling shows how the skin deformed under this condition.
  

Update 5: Grid Tracking with GUI

The improvements made upon the software have been incorporated to produce a user-defined grid, where the number of rows and columns are selected after video uploading. These points are then tracked using the methodology described in the previous update. A graphical user interface (GUI) has also been established for ease of product use, as displayed in the video to the right.

Update 6: Shape Altering Testing

To determine the accuracy of both the grid tracking and feature tracking methods, the scenario of a circle deforming into an ellipse was employed. The video to the left shows the feature tracking method following the deformation of the ellipse and how the data was collected; in the bottom left of the frame, the coordinates of the points are displayed. Because the distance the vertex of the ellipse traveled was known in pixel lengths, the data exported by the software was compared against that known distance. The same test was performed for the grid tracking method. The results showed that the tracking methods were accurate: a known distance of 325 pixels compared to a software-yielded distance of 324 pixels, after a scaling factor of 1.3 was applied to account for video compression.

Update 7: Shear Image Testing

To test whether the software could accurately calculate strain values after tracking skin deformation in an ultrasound video, testing was done using a sheared landscape image. The image the team used as the 'control' can be seen to the right. A simple MATLAB program was created to shear an image to a known angular deformation. A video of this shearing was then created and analyzed by the grid tracking method of the software, with the calculations broken down by elements (the squares seen in the image). The actual shear strain was 0.46, and the average shear strain calculated by the software was 0.486, a percent error of 5.68%. This can be expected to improve when smaller regions of interest are chosen and with further refinement of the tracking subsystem of the code.
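A Python analogue of this test (the team's shear program was written in MATLAB, so this is only a sketch of the idea) applies a known simple shear to a grid of points and recovers the strain from the displacements, exactly the quantity the grid-tracking output provides:

```python
import numpy as np

gamma = 0.46  # applied engineering shear strain, as in the test above

# Grid of "tracked" points before deformation, spaced 20 px apart
xs, ys = np.meshgrid(np.arange(0, 100, 20, dtype=float),
                     np.arange(0, 100, 20, dtype=float))

# Simple shear: x' = x + gamma * y, y' = y
xs_sheared = xs + gamma * ys
ys_sheared = ys.copy()

# Recover shear strain as horizontal displacement per unit height, du_x/dy
du_x = xs_sheared - xs
recovered = np.gradient(du_x, 20.0, axis=0).mean()
print(round(float(recovered), 3))  # 0.46
```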

ih8mylyfe.PNG

Update 8: GUI Implementation and Final Product

The implementation of a graphical user interface (GUI) gives the final software a more professional feel. It also makes the software much easier to navigate and increases the functionality. Final adjustments were made to implement the product into the GUI and as seen to the right, the final product functions smoothly and efficiently. The GUI allows the user to select a file from their computer, import it into the software, and track the desired region of interest. From here, additional software creates a strain field of the desired region, showing regions of compression (red) and tension (blue), allowing practitioners to make important conclusions about the patient's skin health. Updated strain data throughout the tracking is visible on the rightmost panel and can be exported for further analysis.
