- Erik Ziegler Merge current dev branches, then the rest of these PRs go into dev
Outstanding Pull Requests
Cornerstone
CornerstoneTools
dicomParser
Cornerstone WADO Image Loader
Cornerstone Web Image Loader
Issue | Reason | Plan
---|---|---
gh-pages branch still exists | | Delete branch, not necessary anymore
Open Issues
Cache Size Issue
If the cache size is set to 1GB and you load 1GB of data, Cornerstone will consume at least 2GB. That's because the dataset is cached by dataSetCacheManager, and in addition a color/grayscale image is created for each frame using a closure that keeps the imageData (bytes) allocated in memory (see the getPixelData, getImageData, and getCanvas methods). The problem is that imageCache counts only the dataset size, not the size of each image (frame). So if you load a 100MB dataset that contains 100 frames, it will store 100MB (dataset) plus 100 frames (roughly 1MB each). The size of the images created by the wadouri image loader is set to the dataset size, and we don't know the real image size, which makes it difficult to know how many MB are actually in the cache.
Three ways to fix this issue:
- The dataset should use a different structure, storing only the header bytes (with the pixel data removed) and the frames converted into color/grayscale images instead of raw pixel data. Each image would then re-use the same frame data instead of creating a copy of it. However, Cornerstone uses the dataset in many places, so this may not be easy to change.
- All caches (dataSetCacheManager, imageCache, etc.) should add an entry to a Global Cache responsible for counting and removing items from those caches (the way dataSetCacheManager and imageCache already do through their decache methods). This way we could count the sizes of datasets, images, and anything else. Also, to avoid removing a dataset from the cache before its images, each cache entry could carry a priority property; sorting by priority and then timestamp would solve this problem.
- The image should not keep a copy of the bytes but instead build the color/grayscale image on the fly whenever getCanvas is called. This saves memory but hurts performance, because the images would have to be re-created many times (scroll / cine).
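The Global Cache option above could be sketched roughly as follows. This is a minimal illustration, not the real dataSetCacheManager/imageCache API: the `GlobalCache` class, its method names, and the entry shape are all hypothetical, and each owning cache is assumed to hand over a `decache` callback when it registers an entry.

```javascript
// Hypothetical sketch of a Global Cache that counts bytes across all
// caches (dataSetCacheManager, imageCache, ...) and evicts by
// priority, then age, so a dataset (high priority) outlives the
// images derived from it.
class GlobalCache {
  constructor(maximumSizeInBytes) {
    this.maximumSizeInBytes = maximumSizeInBytes;
    this.currentSizeInBytes = 0;
    this.entries = []; // { key, sizeInBytes, priority, timeStamp, decache }
  }

  put(key, sizeInBytes, priority, decache) {
    this.entries.push({
      key, sizeInBytes, priority, timeStamp: Date.now(), decache,
    });
    this.currentSizeInBytes += sizeInBytes;
    this.purge();
  }

  purge() {
    // Lowest priority first, oldest first within the same priority.
    this.entries.sort(
      (a, b) => a.priority - b.priority || a.timeStamp - b.timeStamp
    );
    while (
      this.currentSizeInBytes > this.maximumSizeInBytes &&
      this.entries.length > 0
    ) {
      const entry = this.entries.shift();
      this.currentSizeInBytes -= entry.sizeInBytes;
      entry.decache(entry.key); // ask the owning cache to release it
    }
  }
}
```

With datasets registered at a higher priority than their frames' images, purging would always drop images before the dataset they were derived from.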
- Erik Ziegler Put this on the OHIF roadmap
- Figure out a way to make the Cache manager 100% accurate for how much memory Cornerstone is using.
- Look into color images
- Look into compressed images
- Look into CINE usage and cache involvement
RequestPoolManager Improvements
- Erik Ziegler Put this on the OHIF roadmap
Allow parallel downloads when loading a multi-frame instance
If a multi-frame series starts loading via wadouri, all frame promises are added to the requestPoolManager, but every slot ends up waiting on the same dataset. If you open Chrome DevTools and try to prefetch a different series (we developed a StudyPrefetcher in OHIF), you will see that only the first one is in progress. The second series is downloaded only after the first one is done and all of its promises have resolved.
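One way to avoid tying up every pool slot on the same file is to share a single in-flight download per dataset URL. This sketch is illustrative only and does not match the real requestPoolManager API; `loadDataSet` and `fetchDataSet` are hypothetical names standing in for the loader's dataset fetch.

```javascript
// Keep at most one in-flight download per dataset URL, so the frames
// of a multi-frame instance share one slot and other series can
// download in parallel instead of queuing behind it.
const inflightDataSets = new Map();

function loadDataSet(url, fetchDataSet) {
  if (!inflightDataSets.has(url)) {
    // First request for this dataset starts the download; later frame
    // requests for the same URL just reuse the pending promise.
    inflightDataSets.set(url, fetchDataSet(url));
  }
  return inflightDataSets.get(url);
}
```

All 100 frame requests of a single instance would then resolve from one download, leaving the remaining pool slots free for other series.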
- Abort a download when its promise is rejected.
When a 100MB series is being loaded and the user clicks on another series, he/she has to wait for the first one to finish loading before the second one starts, which can cause a delay in the UI and hurt the user experience.
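Aborting on rejection could look like the cancellable-request pattern below. This is a sketch under assumptions: `cancellableRequest` and `startTransport` are hypothetical names, and in the real loader the abort callback would wrap something like `xhr.abort()` on the in-flight transfer.

```javascript
// Wrap a download so that cancelling both rejects the promise and
// aborts the underlying transport, freeing bandwidth for the series
// the user actually wants.
function cancellableRequest(startTransport) {
  let abortTransport = () => {};
  let rejectPromise = () => {};
  const promise = new Promise((resolve, reject) => {
    rejectPromise = reject;
    // startTransport kicks off the transfer and returns a function
    // that aborts it (e.g. () => xhr.abort() in the WADO loader).
    abortTransport = startTransport(resolve, reject) || (() => {});
  });
  return {
    promise,
    cancel() {
      abortTransport();
      rejectPromise(new Error('Request cancelled'));
    },
  };
}
```

The requestPoolManager could then call `cancel()` on queued requests for a series the user has navigated away from, instead of letting the full download finish.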
General Issues
- ES6 transition should start soon so that new code is cleaner
- Erik Ziegler Put this on OHIF roadmap: Start with dicomParser
- Axe the Meteor package, move towards NPM
- Get rid of bower and use NPM for everything
- Chris Hafey (Unlicensed) Move repo from /chafey/ to /OHIF/
- Check naming conventions for NPM
- Viewers / Cornerstone should use requestAnimationFrame loop to display images
- Multi-layer composition work is ongoing
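The requestAnimationFrame item above could be sketched as a dirty-flag render loop. This is a hedged illustration, not Cornerstone's internals: `createRenderLoop`, `draw`, and the injectable `schedule` parameter are all hypothetical (in the browser you would pass `requestAnimationFrame` as the scheduler).

```javascript
// Instead of redrawing on every event, mark elements dirty and let
// the scheduler (requestAnimationFrame in the browser) coalesce all
// pending redraws into one pass per displayed frame.
function createRenderLoop(draw, schedule) {
  const dirty = new Set(); // Set dedupes repeated invalidations
  let scheduled = false;

  function frame() {
    scheduled = false;
    const elements = Array.from(dirty);
    dirty.clear();
    elements.forEach(draw); // one draw per element per frame
  }

  return function invalidate(element) {
    dirty.add(element);
    if (!scheduled) {
      scheduled = true;
      schedule(frame);
    }
  };
}
```

Invalidating the same element many times between frames (e.g. during cine or rapid window/level changes) would then cost only one draw.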