Archive for the ‘media & arts’ category: Page 3

Jan 28, 2023

Beyond Human: A Billion Years of Evolution and the Fate of Our Species

Posted by in categories: evolution, media & arts, space

Our lifespans might feel long by human standards, but to the Earth they are the blink of an eye. Even the entirety of human history represents a tiny sliver of our planet's vast chronology. We often invoke geological time when looking back into the past, but today we look ahead: what might happen on our planet in the next billion years?

Written and presented by Prof David Kipping, edited by Jorge Casas.

Continue reading “Beyond Human: A Billion Years of Evolution and the Fate of Our Species” »

Jan 28, 2023

My Mind was Blown. AI Music is INSANE! — Google’s NEW MusicLM AI

Posted by in categories: media & arts, robotics/AI

I wonder if musicians should be worried.

Google Research introduces MusicLM, a model that can generate high-fidelity music from text descriptions. See how MusicLM casts the process of conditional music generation as a hierarchical sequence-to-sequence modeling task, and how it outperforms previous systems in audio quality and text description adherence. Learn more about MusicCaps, a dataset composed of 5.5k music-text pairs, and see how MusicLM can be conditioned on both text and a melody. Check out this video to see the power of MusicLM: Generating Music From Text! #GoogleResearch #MusicLM #MusicGeneration.
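The "hierarchical sequence-to-sequence" idea above can be illustrated with a deliberately tiny toy: generation proceeds in stages, with text first mapped to coarse tokens and those expanded into fine tokens. This is an illustrative sketch only, not Google's MusicLM; all token tables and values here are made up.

```python
# Toy sketch (NOT MusicLM): conditional music generation cast as a
# hierarchical sequence-to-sequence task. A text description is mapped to
# coarse "semantic" tokens, which are then expanded into fine "acoustic"
# tokens. Both lookup tables below are invented for illustration.

SEMANTIC = {            # text keyword -> coarse semantic tokens
    "calm": [0, 1],
    "piano": [2, 3],
}
ACOUSTIC = {            # each semantic token -> a run of fine acoustic tokens
    0: [10, 11], 1: [12, 13], 2: [20, 21], 3: [22, 23],
}

def generate(description):
    """Two-stage generation: text -> semantic tokens -> acoustic tokens."""
    semantic = [t for word in description.split() for t in SEMANTIC.get(word, [])]
    acoustic = [a for t in semantic for a in ACOUSTIC[t]]
    return acoustic

print(generate("calm piano"))  # [10, 11, 12, 13, 20, 21, 22, 23]
```

The real system replaces these lookup tables with large learned models, but the staged structure — coarse tokens conditioning the generation of fine ones — is the same shape.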

Continue reading “My Mind was Blown. AI Music is INSANE! — Google’s NEW MusicLM AI” »

Jan 27, 2023

Google created an AI that can generate music from text descriptions, but won’t release it

Posted by in categories: media & arts, robotics/AI

An impressive new AI system from Google can generate music in any genre given a text description. But the company, fearing the risks, has no immediate plans to release it.

Called MusicLM, Google’s system certainly isn’t the first generative AI for music. There have been other attempts, including Riffusion, an AI that composes music by visualizing it, as well as Dance Diffusion, Google’s own AudioLM and OpenAI’s Jukebox. But owing to technical limitations and limited training data, none has been able to produce songs that are particularly complex in composition or high in fidelity.

MusicLM is perhaps the first that can.

Continue reading “Google created an AI that can generate music from text descriptions, but won’t release it” »

Jan 26, 2023

MusicLM: Generating Music From Text — abstract and project page

Posted by in category: media & arts

Jan 25, 2023

Music-based interventions: exploring the neural basis and establishing reproducibility in an emerging field

Posted by in categories: media & arts, neuroscience

Drs Emmeline Edwards and Wen Chen discuss music-based interventions, their impacts, techniques used to understand them and more!

Jan 23, 2023

AI art — automation. A working artist’s take

Posted by in categories: media & arts, robotics/AI

Also some stories from my childhood. Art as a live service. I’m wrong a lot so maybe I’m wrong about this stuff.

Music: “Un coin tranquille — Instrumental Version” by Nono feat. Anat Moshkovski.
I got it on Artlist (like most of the music on the channel) which is a royalty free library that I understand pays their artists pretty well.

Jan 22, 2023

Microsoft’s New AI Can Clone Your Voice in Just 3 Seconds

Posted by in categories: media & arts, robotics/AI

AI is being used to generate everything from images to text to artificial proteins, and now another thing has been added to the list: speech. Last week researchers from Microsoft released a paper on a new AI called VALL-E that can accurately simulate anyone’s voice based on a sample just three seconds long. VALL-E isn’t the first speech simulator to be created, but it’s built in a different way than its predecessors—and could carry a greater risk for potential misuse.

Most existing text-to-speech models use waveforms (graphical representations of sound waves as they move through a medium over time) to create fake voices, tweaking characteristics like tone or pitch to approximate a given voice. VALL-E, though, takes a sample of someone’s voice and breaks it down into components called tokens, then uses those tokens to create new sounds based on the “rules” it already learned about this voice. If a voice is particularly deep, or a speaker pronounces their A’s in a nasal-y way, or they’re more monotone than average, these are all traits the AI would pick up on and be able to replicate.
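The token idea in the paragraph above can be made concrete with a toy example: a waveform is a list of sample values, and quantizing each sample into one of a few discrete bins turns it into a token sequence a model could learn patterns from. This is a hedged illustration only — VALL-E uses a learned neural codec, not the uniform quantizer sketched here, and every number below is invented.

```python
# Toy sketch (NOT VALL-E): turning a waveform into discrete tokens by
# uniform quantization, and mapping tokens back to approximate samples.
# Real systems use learned neural codecs; this only shows the token idea.

def tokenize(samples, levels=8):
    """Quantize samples in [-1.0, 1.0] into integer tokens 0..levels-1."""
    tokens = []
    for s in samples:
        s = max(-1.0, min(1.0, s))                      # clip to valid range
        tokens.append(min(levels - 1, int((s + 1.0) / 2.0 * levels)))
    return tokens

def detokenize(tokens, levels=8):
    """Map tokens back to approximate sample values (bin centers)."""
    return [(t + 0.5) / levels * 2.0 - 1.0 for t in tokens]

waveform = [-0.9, -0.2, 0.0, 0.4, 0.95]
tokens = tokenize(waveform)
print(tokens)                                            # [0, 3, 4, 5, 7]
print([round(x, 3) for x in detokenize(tokens)])
```

The traits the article mentions (a deep voice, nasal vowels, a monotone delivery) would show up as statistical regularities in a speaker's token sequences, which is what a generative model can then imitate.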

The model is based on a technology called EnCodec by Meta, which was released this past October. The tool uses a three-part system to compress audio to a tenth the size of MP3s with no loss in quality; its creators intended one of its uses to be improving the quality of voice and music on calls made over low-bandwidth connections.

Continue reading “Microsoft’s New AI Can Clone Your Voice in Just 3 Seconds” »

Jan 22, 2023

Hooks and Earworms: What Makes Pop Songs So Catchy?

Posted by in category: media & arts

Summary: Researchers explore why some songs constantly get stuck in our heads and why these “hooks” are the guiding principle for modern popular music.

Source: University of Wollongong.

“Hey, I just met you, and this is crazy… But here’s my number, so call me, maybe.”

Jan 22, 2023

Artificial Gravity Network: HEXATRACK-Space Express Connecting Lunar & Mars Glass City and Beyond

Posted by in categories: media & arts, space, virtual reality

00:00 — Lunar Glass & Lunar Vehicle (Music: “Lunar City”)
05:01 — HEXATRACK Space Express Concept (Music: “Constellation”)
10:30 — Mars Glass, Dome City, and Martian Terraforming (Music: “Martian”)
13:54 — Beyond: Proxima Centauri, Tau Ceti e, the TRAPPIST-1 system, and beyond (Music: “Neptune”)

HEXATRACK Space Express concept designed and created by Yosuke A. Yamashiki, Kyoto University.
Lunar Glass & Mars Glass designed and created by Takuya Ono, Kajima Co. Ltd.
Visual effects and detailed design by Junya Okamura.
Concept advisor: Naoko Yamazaki, astronaut, SIC Human Spaceology Center, GSAIS, Kyoto University.
VR of Lunar & Mars Glass created by Natsumi Iwato and Mamiko Hikita, Kyoto University.
VR contents of Lunar & Mars Glass by Shinji Asano, Natsumi Iwato, Mamiko Hikita, and Junya Okamura.
Daidaros concept by Takuya Ono.
Terraformed Mars designed by Fuka Takagi & Yosuke A. Yamashiki.
Exoplanet images created by Ryusuke Kuroki, Fuka Takagi, Hiroaki Sato, Ayu Shiragashi, and Y. A. Yamashiki.
All music (“Lunar City,” “Constellation,” “Martian,” “Neptune”) composed and performed by Yosuke Alexandre Yamashiki.

Jan 19, 2023

All in the Mind: Decoding Brainwaves to Identify the Music We Are Listening To

Posted by in categories: media & arts, robotics/AI

Summary: Combining neuroimaging and EEG data, researchers recorded people’s neural activity while they listened to a piece of music. Machine learning then translated that data to reconstruct and identify the specific piece each test subject was listening to.

Source: University of Essex.

A new technique for monitoring brain waves can identify the music someone is listening to.
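The identification step described above — matching a recording of brain activity to a known piece of music — can be sketched as a simple nearest-centroid classifier. This is an illustrative toy under invented assumptions, not the Essex team's pipeline: the "neural activity" vectors and song labels below are synthetic, and the real work uses far richer models and data.

```python
# Toy sketch (NOT the Essex method): identify which song a listener heard by
# matching a new "neural activity" feature vector to the nearest per-song
# centroid computed from labelled training recordings. All data is synthetic.

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def identify(sample, training):
    """Return the song label whose training centroid is closest to sample."""
    best, best_dist = None, float("inf")
    for song, recordings in training.items():
        c = centroid(recordings)
        d = sum((a - b) ** 2 for a, b in zip(sample, c))  # squared distance
        if d < best_dist:
            best, best_dist = song, d
    return best

training = {
    "song_a": [[0.9, 0.1], [1.1, 0.0]],   # synthetic EEG-derived features
    "song_b": [[0.0, 1.0], [0.2, 0.9]],
}
print(identify([0.8, 0.2], training))     # song_a
```

The design choice worth noting is that identification only needs activity patterns to be *consistent* per song across listeners or sessions, not interpretable — which is why a distance-based classifier is a reasonable mental model for this kind of decoding result.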
