My interest in generative AI started last year and it continues to grow. As an experiment (and as a follow-up to many requests I get in real life), I’ve just started a newsletter for operators (managers, solopreneurs) of small businesses outside the tech industry. The goal is to help them transform their businesses into AI-first orgs.
Paper: possible non-invasive biomarker for tissue inorganic phosphate levels
Continuing to publish work done while still in academia. The PDF of the preprint “The relationship between EMG high-frequency and low-frequency band amplitude changes correlates with tissue inorganic phosphate levels” is available on Research Square.
Messages from the future: generative AI, patchworking and sci-fi
This is an experiment in generating putative messages/reports from the future using the base GPT-3 model (code-davinci-002), with a bit of Stable Diffusion and Canva.
Algorithmically generated music
Live coding is an element of performance for algorithmically generated music.
Libraries like SuperCollider enable artists to experiment with creating sounds and patterns using computer code. Most of the music generated this way is interesting at best, but hardly listenable for an average person. Often the experiment bears no resemblance to anything musical. That doesn’t mean it cannot be pushed further. It’s just that the nature of this type of instrument (the code being the instrument) makes it hard for an average musician to become proficient at it. It requires both coding skills (Haskell _is_ hard) and musical taste to create something really interesting. Given what people already do with modular synthesizers and beat-making hardware, it seems that a code interface is going to be a less handy and wieldy instrument than those (search for State Azure on YouTube). Therefore the question becomes what type of music, or what approach to composition, could make better use of pure code than modular synthesizers do.
I don’t know yet.
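Still, to give a flavor of what code-as-instrument means in practice, here is a minimal sketch (in Python rather than SuperCollider’s own language, purely for illustration) of the Euclidean rhythm algorithm, a pattern-generation technique widely used in live-coding environments. The function name and the Bresenham-style approximation are my choices, not from any particular library:

```python
def euclidean_rhythm(pulses: int, steps: int) -> list[int]:
    """Spread `pulses` onsets as evenly as possible across `steps` slots.

    This is a Bresenham-style approximation of Bjorklund's algorithm:
    accumulate `pulses` each step and emit an onset whenever the
    accumulator wraps past `steps`.
    """
    pattern = []
    bucket = 0
    for _ in range(steps):
        bucket += pulses
        if bucket >= steps:
            bucket -= steps
            pattern.append(1)  # onset (hit)
        else:
            pattern.append(0)  # rest
    return pattern

# Three onsets over eight steps yields a rotation of the classic tresillo.
print(euclidean_rhythm(3, 8))
```

A one-liner like this, mapped onto samples and layered live, is roughly the unit of expression in live coding; the friction the paragraph above describes comes from composing many such abstractions in real time.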