
Most fundamental laws of physics make no distinction between past and future. A video of a swinging pendulum, for example, would look the same if you played it backward. We nevertheless experience time as irreversible because of another law of nature, the second law of thermodynamics, which says that the disorder in a system always increases. A shattered glass never reassembles itself, because its disorder would have to decrease.
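In textbook terms, the “disorder” referred to here is entropy. As a brief aside (this is the standard statement of the law, not part of the Darmstadt result), the second law for an isolated system is usually written as:

```latex
% Second law of thermodynamics for an isolated system:
% the total entropy S (a measure of disorder) never decreases.
\Delta S_{\text{total}} \geq 0
```

Running the film of a shattering glass backward would correspond to a decrease in total entropy, which is why it is never observed.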

The same law would seem to govern the aging of materials. But physicists from Darmstadt have found that this is not entirely the case: they discovered that the motion of molecules in glass or plastic can be reversed in time when viewed from the right perspective.

Albert Einstein was one smart cookie; there’s no doubt about it. But even he knew his general theory of relativity – the 20th century’s answer to Newton’s theory of universal gravitation – wasn’t perfect.

Like the second-hand car you bought using your first paycheck, it does the job for day-to-day errands. Push it too hard up a steep hill or park it near a quantum strip mall, and that engine shudders to a standstill.

Peoples’ Friendship University of Russia astrophysics grad student Hamidreza Fazlollahi’s solution is to dive under the hood and see which components aren’t as essential as they seem.

For example, the New York Times states: “The AI industry this year is set to be defined by one main characteristic: A remarkably rapid improvement of the technology as advancements build upon one another, enabling AI to generate new kinds of media, mimic human reasoning in new ways and seep into the physical world through a new breed of robot.”

Ethan Mollick, writing in his One Useful Thing blog, takes a similar view: “Most likely, AI development is actually going to accelerate for a while yet before it eventually slows down due to technical or economic or legal limits.”

The year ahead in AI will undoubtedly bring dramatic changes. Hopefully, these will include advances that improve our quality of life, such as the discovery of life-saving new drugs. Likely, though, the most optimistic promises will not be realized in 2024, leading to some pullback in market expectations. That is the nature of hype cycles. Hopefully, any such disappointments will not bring about another AI winter.

AI tools like ChatGPT can draft letters, tell jokes and even give legal advice – but only in the form of computerized text.

Now, scientists have created an AI that can imitate human handwriting, which could herald fresh issues regarding fraud and fake documents.

Amazingly, the results are almost indistinguishable from writing produced by a human hand.

Companies like OpenAI and Midjourney have opened Pandora’s box, exposing themselves to considerable legal trouble by training their models on the vastness of the internet while largely turning a blind eye to copyright.

As professor and author Gary Marcus and film industry concept artist Reid Southen, who has worked on several major films for the likes of Marvel and Warner Brothers, argue in a recent piece for IEEE Spectrum, tools like DALL-E 3 and Midjourney could land both companies in a “copyright minefield.”

It’s a heated debate that’s reaching fever pitch. The news comes after The New York Times sued Microsoft and OpenAI, alleging the companies are responsible for “billions of dollars” in damages for training ChatGPT and other large language models on its content without express permission. Well-known authors including “Game of Thrones” writer George R.R. Martin and John Grisham recently made similar arguments in a separate copyright infringement case.

A new, potentially revolutionary artificial intelligence framework called “Blackout Diffusion” generates images from a completely empty picture, meaning that the machine-learning algorithm, unlike other generative diffusion models, does not require a “random seed” to get started. Blackout Diffusion, presented at the recent International Conference on Machine Learning (“Blackout Diffusion: Generative Diffusion Models in Discrete-State Spaces”), generates samples comparable to those of current diffusion models such as DALL-E or Midjourney, but requires fewer computational resources than these models.
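To make the “start from a blank image” idea concrete, here is a minimal toy sketch in Python. It assumes an image made of discrete integer pixel counts; the function name forward_blackout and the survival probability are illustrative choices for this sketch, not the paper’s actual implementation, and the learned reverse model is only indicated in a comment.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_blackout(image_counts, survival_prob):
    """One forward step: each unit of each pixel's discrete count survives
    independently with probability `survival_prob`, so repeated steps drive
    the image toward all zeros (a completely black frame)."""
    return rng.binomial(image_counts, survival_prob)

# A toy 8x8 "image" of small integer pixel counts.
x = rng.integers(0, 16, size=(8, 8))
for t in range(20):
    x = forward_blackout(x, survival_prob=0.7)
print(int(x.sum()))  # after enough steps, essentially nothing survives

# Generation would run this in reverse: a model trained to approximate the
# reverse transition p(x_{t-1} | x_t) samples new images starting from an
# empty frame, np.zeros((8, 8), dtype=int), adding counts back step by step.
```

The contrast with standard diffusion models lies in the starting point: they push images toward Gaussian noise and begin sampling from a random draw, whereas here the terminal state of the forward process is a completely black image.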

“Generative modeling is bringing in the next industrial revolution with its capability to assist many tasks, such as generation of software code, legal documents and even art,” said Javier Santos, an AI researcher at Los Alamos National Laboratory and co-author of Blackout Diffusion. “Generative modeling could be leveraged for making scientific discoveries, and our team’s work laid down the foundation and practical algorithms for applying generative diffusion modeling to scientific problems that are not continuous in nature.”

A new generative AI model can create images from a blank frame. (Image: Los Alamos National Laboratory)

The Times said OpenAI and Microsoft are advancing their technology through the “unlawful use of The Times’s work to create artificial intelligence products that compete with it,” conduct that “threatens The Times’s ability to provide that service.”


The New York Times sued OpenAI and Microsoft for copyright infringement on Wednesday, opening a new front in the increasingly intense legal battle over the unauthorized use of published work to train artificial intelligence technologies.

The Times is the first major American media organization to sue the companies, the creators of ChatGPT and other popular A.I. platforms, over copyright issues associated with its written works. The lawsuit, filed in Federal District Court in Manhattan, contends that millions of articles published by The Times were used to train automated chatbots that now compete with the news outlet as a source of reliable information.

The suit does not include an exact monetary demand. But it says the defendants should be held responsible for “billions of dollars in statutory and actual damages” related to the “unlawful copying and use of The Times’s uniquely valuable works.” It also calls for the companies to destroy any chatbot models and training data that use copyrighted material from The Times.