In the news recently, the US Copyright Office partially rescinded copyright protection for a comic book whose images were entirely AI-generated. It was a landmark decision that is likely just the beginning of a long legal and ethical debate around the role, ethics, and rights of artificial intelligence in today's global society and tomorrow's interplanetary one.
AI artworks are currently being denied copyright protection because copyright only protects human-generated work, and in the Copyright Office's current opinion, the "artist" does not exert enough creative control over the output of the program (i.e., merely using a written prompt to generate an image does not produce a copyrightable work, because the program generated it, not the human involved). At least some AI-generated images are considered to have enough human "involvement" to be copyrightable, but more direct involvement with the imagery is required.
How does this work for the parent when they have a birth certificate but no baby to show for it, and no record of “disposing” of it?
FORT LAUDERDALE, Fla. (AP) — Safe Haven Baby Boxes and A Safe Haven for Newborns are two charities with similar names and the same goal: providing distressed mothers with a safe place to surrender their unwanted newborns instead of dumping them in trash cans or along roadsides.
But a fight between the two is brewing in the Florida Senate. An existing state law, supported and promoted by the Miami-based A Safe Haven, allows parents to surrender newborns to firefighters and hospital workers without giving their names. A new bill, supported by the Indiana-based Safe Haven Baby Boxes, would give fire stations and hospitals the option to install the group’s ventilated and climate-controlled boxes, where parents could drop off their babies without interacting with fire or hospital employees.
Remark: This article is from The Conversation (in English), written by Victor DOS SANTOS PAULINO & Nonthapat PULSIRI (V&N), experts from Toulouse Business School and the SIRIUS Chair (France).
When talking about space, one might think about the stars one sees at night or a good sci-fi film. But space is also crowded with satellites, spacecraft and astronauts, whose missions can last anywhere from several days to months. Meanwhile, 8,216 unmanned satellites orbit the Earth to improve our daily lives. Communication satellites help extend Internet access to regions deprived of infrastructure (so-called "white areas"); meteorology satellites have become essential for weather forecasts; and navigation satellites (including GPS) are crucial for current and future transportation needs such as self-driving vehicles.
Technological advances in the sector have unlocked many new business opportunities. The industry can now launch constellations of thousands of satellites to reach corners of the Earth as never before (e.g., Starlink), while new markets such as space mining and space tourism are steadily growing. Leading spacefaring nations (including the United States and France) have also framed the space sector as a top economic priority. It is thought that the technological benefits accrued by companies such as SpaceX, Blue Origin or OneWeb, launched by billionaires such as Elon Musk, will also trickle down to non-space sectors such as the energy and freight industries.
There is a new catchphrase that some are using when it comes to talking about today's generative AI. I am loath to repeat the phrase, but the discomfort of doing so is worth the chance of curtailing its use going forward.
Are you ready?
Some have been saying that generative AI such as ChatGPT is so-called "alien intelligence." Hogwash. This kind of phrasing has to be stopped. Here are the reasons to do so.
Human memory has been shown to be highly fallible in recent years, but a new study on short-term memory recall indicates that we can get details wrong within seconds of an event happening.
It has long been shown that human memory is highly fallible, with even ancient legal codes requiring more than one witness to corroborate accounts of a crime or events, but a new study reveals that people can create false memories within a second of the event being recalled.
The study, published this week in PLOS One, had hundreds of volunteers, across four experiments, look at a sequence of letters and then recall a single highlighted letter they had been shown. In some cases, the highlighted letter was mirror-reversed, and the respondent needed to recall that detail as well.
But maybe the future of these models is more focused than the boil-the-ocean approach we’ve seen from OpenAI and others, who want to be able to answer every question under the sun.
The amazing abilities of OpenAI's ChatGPT wouldn't be possible without large language models. These models are trained on billions, sometimes trillions, of words of text. The idea behind ChatGPT is to model language so well that it can anticipate, in a split second, which word plausibly comes next. That takes a ton of training, compute resources and developer savvy to make happen.
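For readers new to the idea, a minimal sketch of the next-word-prediction task may help. The toy Python below simply counts which word follows which in a tiny made-up corpus and picks the most frequent continuation; it illustrates only the prediction task itself, not how ChatGPT's transformer-based models actually work, and the corpus, variable names and function are invented for this example.

from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word follows which
# in a tiny corpus, then pick the most likely continuation. Real large
# language models use neural networks trained on vastly more text, but the
# underlying task -- guess the next word -- is the same.
corpus = (
    "the cat sat on the mat . "
    "the cat chased the dog . "
    "the cat purred ."
).split()

# Count bigram frequencies: how often each word appears after each other word.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    # Return the word most often seen right after `word` in the corpus.
    if word not in followers:
        return "<unknown>"
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))   # -> 'cat' (the most common word after 'the' here)
print(predict_next("sat"))   # -> 'on'

A model like the ones behind ChatGPT replaces these raw counts with a neural network that generalizes to word sequences it has never seen, which is why the training data and compute requirements are so enormous.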