
How to Prepare for a GenAI Future You Can’t Predict

Given the staggering pace of generative AI development, it’s no wonder that so many executives are tempted by the possibilities of AI, concerned about finding and retaining qualified workers, and humbled by recent market corrections or missed analyst expectations. They envision a future of work without nearly as many people as today. But this is a miscalculation. Leaders, understandably concerned about missing out on the next wave of technology, are unwittingly making risky bets on their companies’ futures. Here are steps every leader should take to prepare for an uncertain world where generative AI and human workforces coexist but will evolve in ways that are unknowable.


A framework for making plans in the midst of great uncertainty.

Sam Altman Says He Intends to Replace Normal People With AI

That’s one way to talk about other human beings.

As writer Elizabeth Weil notes in a new profile of OpenAI CEO Sam Altman in New York Magazine, the powerful AI executive has a disconcerting penchant for using the term “median human,” a phrase that seemingly equates to a robotic tech bro version of “Average Joe.”

Altman’s hope is that artificial general intelligence (AGI) will have roughly the same intelligence as a “median human that you could hire as a co-worker.”

Will AI make us crazy?

Coverage of the risks and benefits of AI has paid scant attention to how chatbots might affect public health at a time when depression, suicide, anxiety, and mental illness are epidemic in the United States. But mental health experts and the healthcare industry view AI mostly as a promising tool, rather than a potential threat to mental health.

Meta putting AI in smart glasses, assistants and more

People will laugh, dismiss it, and compare it to Google's clown glasses. But around 2030, augmented reality glasses will come out: basically a pair of normal-looking sunglasses with smartphone-type features, AI, and VR capabilities.


Meta chief Mark Zuckerberg on Wednesday said the tech giant is putting artificial intelligence into digital assistants and smart glasses as it seeks to regain lost ground in the AI race.

Zuckerberg made his announcements at the Connect developers conference at Meta’s headquarters in Silicon Valley, the company’s main annual product event.

“Advances in AI allow us to create different (applications) and personas that help us accomplish different things,” Zuckerberg said as he kicked off the gathering.

Tim Cook confirms Apple is researching ChatGPT-style AI

Apple CEO Tim Cook has told UK press that the company is "of course" working on generative AI, and that he expects to hire more artificial intelligence staff in that country.

Just hours after Apple put a spotlight on how it supports over half a million jobs in the UK, Tim Cook has been talking about increasing that by hiring more staff working in AI.

According to London’s Evening Standard, Cook was asked by the PA news agency about AI and hiring in the UK. Cook said: “We’re hiring in that area, yes, and so I do expect [recruitment] to increase.”

Autonomous Racing Drones Are Starting To Beat Human Pilots

Even with all the technological advancements in recent years, autonomous systems have never been able to keep up with top-level human racing drone pilots. However, it looks like that gap has been closed with Swift – an autonomous system developed by the University of Zurich’s Robotics and Perception Group.

Previous research projects have come close, but they relied on optical motion capture in a tightly controlled environment. In contrast, Swift is completely independent of remote inputs and utilizes only an onboard computer, IMU, and camera for real-time navigation and control. It does, however, require a pretrained machine learning model for the specific track, which maps the drone’s estimated position, velocity, and orientation directly to control inputs. The details of how the system works are well explained in the video after the break.
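To make that control loop concrete, here is a minimal sketch (not Swift's actual code, and the network shape, state layout, and command format are assumptions) of a learned policy that maps the drone's estimated state directly to control commands, as the article describes:

```python
# Hedged sketch only: a small policy network mapping estimated state to
# control inputs, illustrating the "state -> commands" idea described above.
import torch
import torch.nn as nn

class DronePolicy(nn.Module):
    def __init__(self, state_dim: int = 10, control_dim: int = 4):
        super().__init__()
        # Assumed state layout: position (3), velocity (3), orientation quaternion (4).
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 128),
            nn.ReLU(),
            nn.Linear(128, control_dim),  # e.g. collective thrust + body rates (assumed)
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

# At each control step, the onboard estimator fuses IMU and camera data into a
# state vector; a pretrained, track-specific policy turns it into commands.
policy = DronePolicy()
estimated_state = torch.zeros(1, 10)      # placeholder state estimate
control_command = policy(estimated_state) # direct state-to-control mapping
```

In this sketch the policy is just a feed-forward network; the key point from the article is that the mapping runs entirely on the onboard computer with no external motion capture or remote input.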

The paper linked above contains a few more interesting details. Swift was able to win 60% of the time, and its lap times were significantly more consistent than those of the human pilots. While human pilots were often faster on certain sections of the course, Swift was faster overall. It picked more efficient trajectories spanning multiple gates, whereas the human pilots seemed to plan at most one gate in advance. On the other hand, human pilots could recover quickly from a minor crash, whereas Swift did not include crash recovery.

One hour of training is all you need to control a third robotic arm

A new study by researchers at Queen Mary University of London, Imperial College London and The University of Melbourne has found that people can learn to use supernumerary robotic arms as effectively as working with a partner in just one hour of training.

The study, published in the IEEE Open Journal of Engineering in Medicine and Biology, investigated the potential of supernumerary robotic arms to help people perform tasks that require more than two hands. The idea of human augmentation with additional artificial limbs has long been featured in science fiction, like Doctor Octopus in The Amazing Spider-Man (1963).

“Many tasks in daily life, such as opening a door while carrying a big package, require more than two hands,” said Dr. Ekaterina Ivanova, lead author of the study from Queen Mary University of London. “Supernumerary robotic arms have been proposed as a way to allow people to do these tasks more easily, but until now, it was not clear how easy they would be to use.”

Artificial Intelligence Improves Brain Tumor Diagnosis

Neurosurgeons can leave the operating room more confident today than ever before about their patient’s brain tumor diagnosis, thanks to a new system that employs optical imaging and artificial intelligence to make brain tumor diagnosis quicker and more accurate. This technology allows them to see diagnostic tissue and tumor margins in near-real time.

For the full story, visit: http://michmed.org/gk8ZD

To learn more about the scientific breakthroughs happening at Michigan Medicine, visit: https://labblog.uofmhealth.org/

Follow Michigan Medicine on Social:

Twitter: https://twitter.com/umichmedicine
Instagram: https://www.instagram.com/umichmedicine/
Facebook: https://www.facebook.com/MichiganMedicine/
