Open-source AI can be defined as software engineers collaborating on artificial intelligence projects that are open for the public to develop, with the goal of better integrating computing with humanity. In early March, the open-source community got its hands on Meta’s LLaMA, which was leaked to the public. In barely a month, highly innovative open-source AI model variants have appeared, featuring instruction tuning, quantization, quality improvements, human evals, multimodality, RLHF, and more.
Open-source models are faster, more customizable, more private, and increasingly capable. They are doing things with $100 and 13B parameters that even market leaders struggle with. One open-source solution, Vicuna, is an…
This article explores AI in the context of open-source alternatives and highlights the market dynamics in play.
Russia claims that its S-350 Vityaz air defence system shot down a Ukrainian aircraft while operating in “automatic mode”. The Russian Deputy PM said that the highly acclaimed S-350 Vityaz air defence system, operating in the NVO zone, demonstrated the ability to autonomously detect, track, and destroy Ukrainian air targets without any operator intervention. Watch the video to find out how the system uses AI.
Science educator Bill Nye joins CNN’s Jim Acosta to explain the significance of the detection of phosphorus in salty ice grains on Saturn’s moon Enceladus.
A team of researchers successfully constructed high-quality nanofiltration membranes using mussel-inspired deposition methods. This was achieved via a two-part approach to fabricating thin-film composite (TFC) nanofiltration membranes. First, the substrate surface was coated through a fast, novel deposition process to form a dense, robust, and functional selective layer. Then, the structural controllability of the selective layer was enhanced by optimizing the interfacial polymerization (IP) process. The resulting nanofiltration membranes exhibit high durability and added functionality. In a broader perspective, these high-performance TFC nanofiltration membranes are potential solutions for a number of fields, including water softening, wastewater treatment, and pharmaceutical purification. Hence, there is a need to further explore and expand their application at industrial scale rather than keeping them confined to the laboratory.
Membrane-based technologies, especially enhanced nanofiltration systems, have been widely explored for their distinct properties, primarily their high efficiency, mild operating conditions, and strong adaptability. Among these, TFC nanofiltration membranes are favoured for their smaller molecular weight cutoff and narrower pore size distribution, which lead to higher rejection of divalent and multivalent ions. Moreover, these membranes offer better designability owing to their thin selective layer and porous supports with different chemical compositions. However, the interfacial polymerization (IP) reaction rate is known to affect the permeability and selectivity of TFC nanofiltration membranes by weakening the controllability of the selective layer structure. Therefore, this study was designed to improve the structural quality of TFC nanofiltration membranes through surface and interface engineering and, subsequently, increase their functionality.
Many everyday tasks fall into the mathematical class of “hard” problems, typically those in the NP-hard (nondeterministic polynomial-time hard) complexity class. These tasks require systematic approaches (algorithms) for optimal outcomes. For significantly complex problems (e.g., the number of ways to configure a product or the number of stops on a delivery trip), the required computation grows rapidly and quickly outstrips human cognitive capacity.
A recent Science Advances study investigated the effectiveness of three popular smart drugs, namely modafinil (MOD), methylphenidate (MPH), and dextroamphetamine (DEX), against the difficulty of real-life daily tasks, i.e., the 0–1 knapsack optimization problem (the “knapsack task”). The knapsack task is a combinatorial optimization problem belonging to the class of NP-hard problems.
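To make the knapsack task concrete, here is a minimal sketch of the 0–1 knapsack problem in Python. The item weights and values are illustrative, not taken from the study; note that while the general problem is NP-hard, this classic dynamic-programming solution runs in pseudo-polynomial time O(n × capacity).

```python
def knapsack(weights, values, capacity):
    """Maximize total value of a subset of items without exceeding capacity.

    Classic 0-1 knapsack dynamic program: best[c] holds the best total
    value achievable with remaining capacity c.
    """
    best = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # Iterate capacities downward so each item is taken at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# Illustrative items: (weight, value) pairs, capacity 10.
print(knapsack([3, 4, 5, 8], [4, 5, 6, 10], 10))  # -> 11 (items of weight 4 and 5)
```

The downward capacity loop is what distinguishes the 0–1 variant from the unbounded knapsack, where each item may be reused.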
The compelling feature of this new breed of quasiparticle, says Pedram Roushan of Google Quantum AI, is the combination of their accessibility to quantum logic operations and their relative invulnerability to thermal and environmental noise. This combination, he says, was recognized in the very first proposal of topological quantum computing, in 1997 by the Russian-born physicist Alexei Kitaev.
At the time, Kitaev realized that non-Abelian anyons could run any quantum computer algorithm. And now that two separate groups have created the quasiparticles in the wild, each team is eager to develop its own suite of quantum computational tools around them.
Meta AI researchers have moved a step forward in the field of generative AI for speech with the development of Voicebox. Unlike previous models, Voicebox can generalize to speech-generation tasks that it was not specifically trained for, demonstrating state-of-the-art performance.
Voicebox is a versatile generative system for speech that can produce high-quality audio clips in a wide variety of styles. It can create outputs from scratch or modify existing samples. The model supports speech synthesis in six languages, as well as noise removal, content editing, style conversion, and diverse sample generation.
Traditionally, generative AI models for speech required specific training for each task using carefully prepared training data. Voicebox instead adopts a new approach called Flow Matching, which surpasses diffusion models in performance. It outperforms existing state-of-the-art models like VALL-E for English text-to-speech, achieving a better word error rate (1.9% vs. 5.9%) and audio similarity (0.681 vs. 0.580), while also being up to 20 times faster. In cross-lingual style transfer, Voicebox surpasses YourTTS by reducing word error rates from 10.9% to 5.2% and improving audio similarity from 0.335 to 0.481.
Stella Vita is the world’s first solar-powered campervan, capable of a staggering 600 km on a single charge! Aptly described as a “self-sustaining house on wheels”, it comes kitted out with a double bed, sofa, kitchen area, shower, sink, and toilet! This could just be the perfect way to go off-grid…! Robert went to meet the engineers at Eindhoven University of Technology to see it for himself.
0:00 A solar powered campervan?!
1:20 A 3000 km road trip.
3:55 Better than the back of a Tesla.
4:38 Back to Uni.
6:40 600 km of range.
7:12 Everything is lightweight.
8:51 Experimental but comfortable.
9:44 Key design elements.
10:43 Built in this room.
11:35 Robert makes his bid.
12:02 Arriving in Tarifa.
12:50 Can we buy one?
13:30 Bobby’s outro.
[Prof. Marvin Minsky] is a very well-known figure in the field of computing, having co-founded the MIT AI Lab, published extensively on AI and computational intelligence, and, let’s not forget, invented the confocal microscope and, of course, the useless machine. But did you know he was also a co-developer of the first Logo “turtle,” and developed a computer intended to run Logo applications in an educational environment? After dredging some PDP-10 tapes owned by the MIT Media Lab, the original schematics for his machine, the Turtle Terminal TT2500 (a reference to the target price of $2,500 in 1970 terms), are now available for you to examine.
The machine itself was constructed in an interesting way: some three hundred discrete socketed TTL chips were affixed to a large panel, and the interconnect was wired automatically by a computer-controlled wiring machine that read the design from magnetic tape. The TT2500 used 16-bit user-definable instructions read from a tiny 4K control store; instruction microcode was read from a 1K microcode store backed by 64K of RAM. Unusually, it sported a dual-display configuration, with one text display and a second vector display for rendering real-time graphics. The machine was intended to run the Logo programming language developed by [Seymour Papert] and others, but its tiny control store made that impossible. Instead, it became a display terminal for a connected computer with sufficient resources. You can read more about this fascinating period in AI, and the lives of [Minsky] and others, in this New Yorker article.