Imagine this: a round, plump robot, like a giant bowling ball, that can roll on land, swim in water, and perform all sorts of high-tech operations. On October 9th, a team of scientists from Zhejiang University unveiled something called the RT-G spherical robot, claiming it’s a …
Breaking oxygen out of a water molecule is a relatively simple process, at least chemically. Even so, it does require components, one of the most important of which is a catalyst. Catalysts enable reactions and are linearly scalable, so if you want more reactions quickly, you need a bigger catalyst. In space exploration, bigger means heavier, which translates into more expensive. So, when humanity is looking for a catalyst to split water into oxygen and hydrogen on Mars, creating one from local Martian materials would be worthwhile. That is precisely what a team from Hefei, China, did by using what they called an “AI Chemist.”
Unfortunately, the name “AIChemist” didn’t stick, though that joke might vary depending on the font you read it in. Whatever its name, the team’s work was some serious science. It applied the machine-learning algorithms that have become all the rage lately to select an effective catalyst for an “oxygen evolution reaction,” utilizing materials native to Mars.
To say it only chose the catalyst isn’t giving the system the full credit it’s due, though. It accomplished a series of steps, including developing a catalyst formula, pretreating the ore to create the catalyst, synthesizing it, and testing it once it was complete. The authors estimate that the automated process saved over 2,000 years of human labor by completing all of these tasks and point to the exceptional results of the testing to prove it.
We’ve all been there. Moments after leaving a party, your brain is suddenly filled with intrusive thoughts about what others were thinking. “Did they think I talked too much?” “Did my joke offend them?” “Were they having a good time?”
In a new Northwestern Medicine study, scientists sought to better understand how humans evolved to become so skilled at thinking about what’s happening in other peoples’ minds. The findings could have implications for one day treating psychiatric conditions such as anxiety and depression.
“We spend a lot of time wondering, ‘What is that person feeling, thinking? Did I say something to upset them?’” said senior author Rodrigo Braga. “The parts of the brain that allow us to do this are in regions of the human brain that have expanded recently in our evolution, and that implies that it’s a recently developed process. In essence, you’re putting yourself in someone else’s mind and making inferences about what that person is thinking when you cannot really know.”
RALEIGH, N.C. — Particle physicist Hitoshi Murayama admits that he used to worry about being known as the “most hated man” in his field of science. But the good news is that now he can joke about it.
Last year, the Berkeley professor chaired the Particle Physics Project Prioritization Panel, or P5, which drew up a list of multimillion-dollar physics experiments that should move ahead over the next 10 years. The list focused on phenomena ranging from subatomic smash-ups to cosmic inflation. At the same time, the panel also had to decide which projects would have to be left behind for budgetary reasons, which could have turned Murayama into the Dr. No of physics.
Although Murayama has some regrets about the projects that were put off, he’s satisfied with how the process turned out. Now he’s just hoping that the federal government will follow through on the P5’s top priorities.
Even the biggest investors often make terrible trading decisions for their portfolios.
At an AI summit in Tokyo on Wednesday, Jensen Huang and Masayoshi Son joked about how SoftBank was once Nvidia’s largest shareholder before dumping its stake. The two billionaires are now joining forces on a Japanese supercomputer. SoftBank, which until early 2019 owned 4.9% of Nvidia, has secured a favorable spot in line for the chipmaker’s latest products.
If you’ve recently scrolled through Instagram, you’ve probably noticed it: users posting AI-generated images of their lives or chuckling over a brutal feed roast by ChatGPT. What started as an innocent prompt – “Ask ChatGPT to draw what your life looks like based on what it knows about you” – has gone viral, inviting friends, followers, and even ChatGPT itself to get a peek into our most personal details. It’s fun, often eerily accurate, and, yes, a little unnerving.
The trend that started it all
A while ago, Instagram’s “Add Yours” sticker spurred the popular trend “Ask ChatGPT to roast your feed in one paragraph.” What followed were thousands of users clamouring to see the AI’s take on their profiles. ChatGPT didn’t disappoint – delivering razor-sharp observations on everything from overused vacation spots to the endless brunch photos and quirky captions, blending humour with a dash of truth. The playful roasting felt oddly familiar, almost like a best friend’s inside joke.
Model is featured in figure 5.4 of Visualizing Mathematics with 3D Printing. This is joint work with Keenan Crane.
🧠 Neuromodulation through the eyes 👀
Neuroplasticity, also known as neural plasticity or brain plasticity, is a process that involves adaptive structural and functional changes to the brain.
Founded and directed by Deborah Zelinsky, O.D., F.N.O.R.A., F.C.O.V.D.
Just as with eye-hand coordination, integration of vision and sound – eye-ear connection – must be developed. If the two senses are out of sync, a person can experience difficulties in academics, social situations and activities such as sports.
Prefatory Note: Our usual policy at The Threepenny Review is to assign one book to one author. But in this case two of our longtime writers—P. N. Furbank, an essayist, critic, and biographer who lives in London, and Louis B. Jones, a novelist and essayist who lives in the Sierra foothills—both wanted to review the same book. So we let them. We think the results are instructive: not oppositional, not mutually contradictory, but very different approaches to the same subject. We are also pleased that neither Jones nor Furbank trained as a professional philosopher. (After all, philosophical theories, if they bear on reality, should be meaningful to the rest of us.) So here they are—first Jones, then Furbank—commenting on Thomas Nagel’s Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False, out in the fall of 2012 in both America and England from Oxford University Press.
My stranded trailer in the woods looks onto a clearing where wild sweet pea vies with starthistle, fescue with blue-eye grass and miner’s lettuce, all competing as they’ve done, possibly, since the Sierra first crumbled into soil and started inviting plants to colonize. It is a patch of ground, then, that existed through the geologic ages in the peculiar twilight oblivion of being unwitnessed—until the first Maidu people came along, probably climbing up from the creek below. Before the Maidu, the witnesses of the place were the animals. And now these days I’m here, to substantiate this little clearing’s existence. It’s almost a weary old joke in philosophy, but still a surefire, hard-to-retire joke—that I’m necessary to this clearing’s existence. My mind. The joke, however, is making a serious, small comeback in this century.
A recent study by UC San Diego researchers brings fresh insight into the ever-evolving capabilities of AI. The authors examined the degree to which several prominent AI models (GPT-4, GPT-3.5, and the classic chatbot ELIZA) could convincingly mimic human conversation, an application of the so-called Turing test for identifying when a computer program has reached human-level intelligence.
The results were telling: In a five-minute text-based conversation, GPT-4 was mistakenly identified as human 54 percent of the time, contrasted with ELIZA’s 22 percent. These findings not only highlight the strides AI has made but also underscore the nuanced challenges of distinguishing human intelligence from algorithmic mimicry.
The important twist in the UC San Diego study is what it reveals about what constitutes human-level intelligence in practice. It isn’t mastery of advanced calculus or another challenging technical field. Instead, what stands out about the most advanced models is their social-emotional persuasiveness: for an AI to pass (that is, to fool a human judge), it has to effectively imitate the subtleties of human conversation. When judging whether their interlocutor was an AI or a human, participants tended to focus on whether responses were overly formal, displayed excessively correct grammar or repetitive sentence structures, or exhibited an unnatural tone. Participants flagged stilted or inconsistent personalities or senses of humor as non-human.