If you have complex imaging data that is hard to visualize (e.g., more than 3 color channels, many spatial slices or time points, or high dynamic range data), then you might benefit from turning images into sounds, in other words: image sonification.
With Pixasonics, you can combine visualization and sonification: You can load image sequences, step through spatial or temporal slices, select areas and listen "through" all slices at once or in a sequence, or quickly compare regions or objects. Want to get creative? Then load any data as an "image" (matrix) and turn it into your instrument. You can spawn several systems in the same Notebook to test the same data in different setups or compare various image sequences with the same sonification pipeline.
Sharing is easy: As long as your teammates have your images and Pixasonics on their systems, they can precisely reproduce your results.
Do you speak Python? You can easily extend Pixasonics' classes to tailor them to your needs or integrate image sonification into your broader workflow.
Hungry for more info? Have a look at the project repository!
Note that we will work in Python, using Anaconda to create a separate environment for Pixasonics. If you do not have Anaconda installed, you can download Miniconda. (If you prefer to use venv, you are welcome to, but the guidance will cover the conda way.)
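For reference, a minimal conda-based setup could look like the sketch below. The environment name, Python version, the assumption that Pixasonics is installable from PyPI under the name pixasonics, and the inclusion of JupyterLab for the notebook workflow are all examples, not official instructions:

# create and activate a dedicated environment (name and Python version are just examples)
conda create -n pixasonics python=3.11
conda activate pixasonics

# install Pixasonics (assuming this is the package name on PyPI) plus JupyterLab for the notebook workflow
pip install pixasonics jupyterlab

# start JupyterLab and open a new notebook to work in
jupyter lab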
Bring headphones with you! We'll be making some noise.
Registration
Please sign up so that we know how many to expect.