Archive for the ‘robotics/AI’ category: Page 974

Dec 16, 2022

I want machines to write as fluently as humans

Posted by in category: robotics/AI

What if artificial intelligence could help an aspiring author write a novel? Or coach people to improve the quality of their writing? Could machines learn how to make jokes? Inspired by these questions, computer scientist Jiao Sun has been exploring the potential of AI-generated text as a PhD candidate at the University of Southern California (USC).

After a four-month internship at Alexa AI last spring, she is now starting her journey as an Amazon Machine Learning Fellow for the 2022–23 academic year and hopes to continue developing text-generation models that enhance the interaction between humans and AI.

While Sun is passionate about the potential of natural language generation, she also believes it is important to develop tools that improve human control over machine-generated content, and she remains cautiously optimistic about the surge in popularity of text-generation models.

Dec 16, 2022

Your Creativity Won’t Save Your Job From AI

Posted by in category: robotics/AI

Robots were once considered capable only of unimaginative, routine work. Today they write articles and create award-winning art.

Dec 16, 2022

Breakthrough of the Year

Posted by in categories: innovation, robotics/AI

JWST makes a spectacular debut, AI gets creative, giant bacteria surprise, and the year’s other big advances in science.

Dec 16, 2022

Rick And Morty Creator Used Controversial AI Art, Voice Acting In New Shooter

Posted by in categories: entertainment, robotics/AI

High on Life co-creator Justin Roiland explained that AI art makes the game feel like a ‘strange alternate dimension’.

Dec 15, 2022

UberEats is rolling out a fleet of self-driving delivery robots in Miami

Posted by in categories: robotics/AI, transportation

Uber and Cartken have announced a partnership to introduce Miami to a fleet of tiny autonomous delivery vehicles.

Dec 15, 2022

Exclusive: ChatGPT owner OpenAI projects $1 billion in revenue by 2024

Posted by in categories: finance, robotics/AI

The forecast, first reported by Reuters, represents how some in Silicon Valley are betting the underlying technology will go far beyond splashy and sometimes flawed public demos.

OpenAI was most recently valued at $20 billion in a secondary share sale, one of the sources said. The startup has already inspired rivals and companies building applications atop its generative AI software, which includes the image maker DALL-E 2. OpenAI charges developers licensing its technology about a penny or a little more to generate 20,000 words of text, and about 2 cents to create an image from a written prompt, according to its website.
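
As a rough sense of scale, the quoted rates work out to fractions of a cent per page of text. The back-of-envelope helper below is purely illustrative: the constants are this article's round numbers, not OpenAI's actual per-token pricing, and the function name is made up.

```python
# Back-of-envelope cost estimate using the approximate rates quoted above:
# ~$0.01 per 20,000 words of generated text and ~$0.02 per generated image.
# These constants are the article's round numbers, not official pricing.

TEXT_RATE_USD_PER_20K_WORDS = 0.01   # approximate figure from the article
IMAGE_RATE_USD = 0.02                # approximate figure from the article

def estimate_cost(words_generated: int, images_generated: int = 0) -> float:
    """Rough cost in USD at the article's quoted rates (illustrative only)."""
    text_cost = (words_generated / 20_000) * TEXT_RATE_USD_PER_20K_WORDS
    image_cost = images_generated * IMAGE_RATE_USD
    return text_cost + image_cost

# Example: a 100,000-word draft plus 10 generated cover concepts
print(f"${estimate_cost(100_000, 10):.2f}")  # -> $0.25
```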

A spokesperson for OpenAI declined to comment on its financials and strategy. The company, which started releasing commercial products in 2020, has said its mission remains advancing AI safely for humanity.

Dec 15, 2022

The Rise of OpenAI With Sam Altman (OpenAI CEO) | Moonshots and Mindsets

Posted by in categories: health, robotics/AI

In this episode, filmed in early 2021, Sam and Peter discuss the creation of OpenAI and GPT-3, what the future of OpenAI will bring to humanity, and the power of AI/Human collaboration.

Sam Altman is the Co-Founder and CEO of OpenAI and former president of Y Combinator. OpenAI is an artificial intelligence research laboratory that creates programs, such as GPT-3, ChatGPT, and DALL-E 2, for the benefit of humanity.

Continue reading “The Rise of OpenAI With Sam Altman (OpenAI CEO) | Moonshots and Mindsets” »

Dec 15, 2022

NVIDIA’s New AI: Video Game Graphics, Now 60x Smaller!

Posted by in category: robotics/AI

❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers.
❤️ Their mentioned post is available here: http://wandb.me/variable-bitrate.

📝 The paper “Variable Bitrate Neural Fields” is available here:
https://nv-tlabs.github.io/vqad/

Continue reading “NVIDIA’s New AI: Video Game Graphics, Now 60x Smaller!” »

Dec 15, 2022

NVIDIA Researchers Present ‘RANA,’ a Novel Artificial Intelligence Framework for Learning Relightable and Articulated Neural Avatars of Humans

Posted by in categories: biotech/medical, entertainment, robotics/AI

Human-like articulated neural avatars have many uses in telepresence, animation, and visual content production. To be widely adopted, these neural avatars must be easy to create, easy to animate in new poses and viewpoints, capable of photorealistic rendering, and easy to relight in novel environments. Existing techniques frequently train these neural avatars from monocular videos. While this permits animation with photorealistic image quality, the synthesized images remain tied to the lighting conditions of the training video. Other studies address relighting of human avatars specifically, but they do not give the user control over body pose. Moreover, these methods often require multiview images captured in a Light Stage for training, which is only feasible in controlled environments.

Some recent techniques attempt to relight dynamic humans in RGB videos, but they lack control over body pose. The proposed approach requires only a short monocular video clip of the person in their natural environment, clothing, and pose to produce an avatar; at inference time, only the target novel body pose and lighting information are needed. Learning relightable neural avatars of dynamic humans from monocular RGB videos captured in unknown environments is difficult. Here, the authors introduce the Relightable Articulated Neural Avatar (RANA) approach, which enables photorealistic human animation under any novel body pose, viewpoint, and lighting condition. The method must first model the complex articulation and geometry of the human body.

To enable relighting in new environments, the texture, geometry, and illumination information must be disentangled, which is a difficult problem to solve from RGB footage alone. To overcome these difficulties, the authors first use a statistical human shape model, SMPL+D, to extract canonical coarse geometry and texture from the training frames. They then propose a convolutional neural network, trained on synthetic data, to remove the shading information from the coarse texture. Finally, they add learnable latent features to the coarse geometry and texture and feed them to the proposed neural avatar architecture, which uses two convolutional networks to produce fine normal and albedo maps of the person under the target body pose.
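
The description above amounts to a three-stage pipeline: SMPL+D yields coarse canonical geometry and texture, a shading-removal network produces a rough albedo, and two decoders generate fine normal and albedo maps for the target pose. The PyTorch sketch below only mirrors that data flow; all module names, layer sizes, and tensor shapes are illustrative assumptions, not NVIDIA's implementation.

```python
# Minimal sketch of the RANA-style data flow described above; NOT official code.
# Module names, tensor shapes, and layer widths are illustrative assumptions.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Generic 3-layer conv net used as a stand-in for each stage."""
    def __init__(self, in_ch: int, out_ch: int, width: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class RanaSketch(nn.Module):
    def __init__(self, latent_ch: int = 16, tex_res: int = 128):
        super().__init__()
        # 1) Network that strips baked-in shading from the coarse SMPL+D
        #    texture (trained on synthetic data in the paper).
        self.deshade = SmallCNN(in_ch=3, out_ch=3)
        # 2) Learnable latent features attached to the coarse texture/geometry.
        self.latent = nn.Parameter(torch.zeros(1, latent_ch, tex_res, tex_res))
        # 3) Two decoders that produce fine normal and albedo maps for the
        #    target body pose (here encoded crudely as a 3-channel pose map).
        in_ch = 3 + latent_ch + 3  # de-shaded texture + latents + pose map
        self.normal_net = SmallCNN(in_ch, out_ch=3)
        self.albedo_net = SmallCNN(in_ch, out_ch=3)

    def forward(self, coarse_texture, pose_map):
        albedo_coarse = self.deshade(coarse_texture)   # remove shading
        latent = self.latent.expand(coarse_texture.shape[0], -1, -1, -1)
        feats = torch.cat([albedo_coarse, latent, pose_map], dim=1)
        normals = self.normal_net(feats)               # fine normal map
        albedo = self.albedo_net(feats)                # fine albedo map
        # Relighting under a novel environment light would consume
        # (normals, albedo) downstream; that renderer is omitted here.
        return normals, albedo

if __name__ == "__main__":
    model = RanaSketch()
    tex = torch.rand(1, 3, 128, 128)   # placeholder coarse SMPL+D texture
    pose = torch.rand(1, 3, 128, 128)  # placeholder rasterized target-pose map
    n, a = model(tex, pose)
    print(n.shape, a.shape)            # torch.Size([1, 3, 128, 128]) twice
```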

Dec 15, 2022

Skin bioprinting: the future of burn wound reconstruction?

Posted by in categories: 3D printing, bioengineering, bioprinting, biotech/medical, robotics/AI

In addition to laser-assisted bioprinting, other light-based 3D bioprinting techniques include digital light processing (DLP) and two-photon polymerization (TPP)-based 3D bioprinting. DLP uses a digital micro-mirror device to project a patterned mask of ultraviolet (UV)/visible range light onto a polymer solution, which in turn results in photopolymerization of the polymer in contact [56, 57]. DLP can achieve high resolution with rapid printing speed regardless of the layer’s complexity and area. In this method of 3D bioprinting, the dynamics of the polymerization can be regulated by modulating the power of the light source, the printing rate, and the type and concentrations of the photoinitiators used. TPP, on the other hand, utilizes a focused near-infrared femtosecond laser of wavelength 800 nm to induce polymerization of the monomer solution [56]. TPP can provide a very high resolution beyond the light diffraction limit since two-photon absorption only happens in the center region of the laser focal spot where the energy is above the threshold to trigger two-photon absorption [56].
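
The sub-diffraction resolution of TPP follows from the fact that two-photon absorption scales with the square of the local intensity, so only the very center of the focal spot exceeds the polymerization threshold. The small numerical illustration below makes this concrete; the Gaussian beam profile and threshold value are arbitrary choices for demonstration, not parameters from the cited work.

```python
# Why TPP confines curing more tightly than single-photon curing:
# the two-photon rate scales as intensity^2, so the region above a fixed
# threshold is narrower. Beam profile and threshold are illustrative only.
import numpy as np

r = np.linspace(-2.0, 2.0, 4001)      # radial position (units of beam waist)
intensity = np.exp(-2 * r**2)         # normalized Gaussian beam profile
one_photon = intensity                # absorption rate proportional to I
two_photon = intensity**2             # absorption rate proportional to I^2

threshold = 0.5                       # arbitrary polymerization threshold

def cured_width(signal):
    """Full width of the region where the signal exceeds the threshold."""
    above = r[signal > threshold]
    return above.max() - above.min()

print(f"1-photon cured width: {cured_width(one_photon):.3f} beam waists")
print(f"2-photon cured width: {cured_width(two_photon):.3f} beam waists")
# The two-photon width comes out ~30% narrower for this profile, illustrating
# the tighter confinement exploited by TPP at the laser focus.
```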

The recent development of the integrated tissue and organ printer (ITOP) by our group allows for bioprinting of human-scale tissues of any shape [45]. The ITOP facilitates bioprinting with very high precision; it has a resolution of 50 μm for cells and 2 μm for scaffolding materials. This enables recapitulation of heterocellular tissue biology and allows for fabrication of functional tissues. The ITOP is configured to deliver the bioink within a stronger water-soluble gel, Pluronic F-127, that helps the printed cells maintain their shape during the printing process. Thereafter, the Pluronic F-127 scaffolding is simply washed away from the bioprinted tissue. To ensure adequate oxygen diffusion into the bioprinted tissue, microchannels are created with the biodegradable polymer polycaprolactone (PCL). Stable human-scale ear cartilage, bone, and skeletal muscle structures were printed with the ITOP, which, when implanted in animal models, matured into functional tissue and developed a network of blood vessels and nerves [45]. In addition to the use of materials such as Pluronic F-127 and PCL for support scaffolds, other strategies for improving the structural integrity of 3D bioprinted constructs include the use of suitable thickening agents such as hydroxyapatite particles, nanocellulose, and xanthan and gellan gum. Further, the use of hydrogel mixtures instead of a single hydrogel is a helpful strategy. For example, a gelatin-methacrylamide (GelMA)/hyaluronic acid (HA) mixture shows enhanced printability compared with GelMA alone, since HA increases the viscosity of the mixture while crosslinking of GelMA preserves post-printing structural integrity [58].

To date, several studies have investigated skin bioprinting as a novel approach to reconstruct functional skin tissue [44, 59,60,61,62,63,64,65,66,67]. Compared with conventional tissue engineering strategies, fabricating skin constructs by bioprinting offers automation and standardization for clinical application, as well as precise deposition of cells. Although conventional strategies (i.e., culturing cells on a scaffold and maturing the construct in a bioreactor) can currently achieve results similar to bioprinting, many aspects of the skin production process still require improvement, including the long production times needed to obtain the large surfaces required to cover entire burn wounds [67]. There are two approaches to skin bioprinting: in situ bioprinting and in vitro bioprinting. The two are similar except for the site of printing and tissue maturation. In situ bioprinting involves printing pre-cultured cells directly onto the site of injury for wound closure, allowing the skin to mature at the wound site. For burn wound reconstruction, in situ bioprinting offers several advantages, including precise deposition of cells on the wound and elimination of the need for expensive, time-consuming in vitro differentiation and for multiple surgeries [68]. In in vitro bioprinting, the skin is printed and matured in a bioreactor and then transplanted to the wound site. Our group is working on developing approaches for in situ bioprinting [69].

An inkjet-based bioprinting system was developed to print primary human keratinocytes and fibroblasts on dorsal full-thickness (3 cm × 2.5 cm) wounds in athymic nude mice. First, fibroblasts (1.0 × 10⁵ cells/cm²) incorporated into fibrinogen/collagen hydrogels were printed on the wounds, followed by a layer of keratinocytes (1.0 × 10⁷ cells/cm²) above the fibroblast layer [69]. Complete re-epithelialization was achieved in these relatively large wounds after 8 weeks. The system uses a novel cartridge-based delivery system to deposit cells at the site of injury: a laser scanner scans the wound and creates a map of the missing skin, and fibroblasts and keratinocytes are printed directly onto this area, where they form the dermis and epidermis, respectively. The approach was further validated in a pig wound model, in which larger wounds (10 cm × 10 cm) were treated by printing a layer of fibroblasts followed by keratinocytes (10 million cells each) [69]. Wound healing and complete re-epithelialization were observed by 8 weeks. This pivotal work shows the potential of in situ bioprinting for wound healing and skin regeneration, and clinical studies with this system are currently in progress.

In another study, amniotic fluid-derived stem cells (AFSCs) were bioprinted directly onto full-thickness dorsal skin wounds (2 cm × 2 cm) of nu/nu mice using a pressure-driven, computer-controlled bioprinting device [44]. AFSCs and bone marrow-derived mesenchymal stem cells were suspended in fibrin-collagen gel, mixed with thrombin solution (a crosslinking agent), and then printed onto the wound site; two layers of fibrin-collagen gel and thrombin were printed on the wounds. Bioprinting enabled effective wound closure and re-epithelialization, likely through a growth factor-mediated mechanism of the stem cells. These studies indicate the potential of in situ bioprinting for the treatment of large wounds and burns.
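
For a sense of the cell numbers involved, the seeding densities and wound dimensions quoted for the mouse study above imply the totals computed below; the helper function is purely illustrative and not part of the published protocol.

```python
# Rough cell-count arithmetic for the mouse study described above:
# 3 cm x 2.5 cm wounds, fibroblasts at 1.0e5 cells/cm^2 and keratinocytes
# at 1.0e7 cells/cm^2. The helper is illustrative, not from the study.

def cells_needed(length_cm: float, width_cm: float, density_per_cm2: float) -> float:
    """Total cells required to seed a rectangular wound at a given density."""
    return length_cm * width_cm * density_per_cm2

fibroblasts = cells_needed(3.0, 2.5, 1.0e5)
keratinocytes = cells_needed(3.0, 2.5, 1.0e7)
print(f"fibroblasts:   {fibroblasts:.2e} cells")    # 7.50e+05
print(f"keratinocytes: {keratinocytes:.2e} cells")  # 7.50e+07
```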

Page 974 of 2,416