
Can Humans Be Replaced by Machines?

“Genius Makers” and “Futureproof,” both by experienced technology reporters now at The New York Times, are part of a rapidly growing literature attempting to make sense of the A.I. hurricane we are living through. These are very different kinds of books — Cade Metz’s is mainly reportorial, about how we got here; Kevin Roose’s is a casual-toned but carefully constructed set of guidelines about where individuals and societies should go next. But each valuably suggests a framework for the right questions to ask now about A.I. and its use.


Two new books — “Genius Makers,” by Cade Metz, and “Futureproof,” by Kevin Roose — examine how artificial intelligence will change humanity.

Collective Superintelligence Summit (Early Bird)

Your Survival Depends On All Of Us — Support Open Sourcing Collective Superintelligence

The premise of the summit is that Artificial Superintelligence (ASI) is coming eventually. Groups of organizations are already discussing the existential risk that ASI poses to humanity, and even if we only develop an AGI, that AGI will still create ASI, and we lose control at some point. Supporting the open sourcing of collective superintelligent systems is our only hope for keeping up before other technologies outpace our ability to do so.

Please support our summit and help decide how to open source a version of the mASI (mediated Artificial Superintelligence) system, and how to create a community-driven effort to make these systems better and better. Attendance helps raise enough money to cover support services, cloud infrastructure, and the digital resources needed to get this open-source project running, including publishing and support costs, while also raising awareness of it.

Papers and formal thinking are also badly needed. The field of collective intelligence is poorly represented in scientific papers, and we hope this project can bring more prominence to the possibility of helping humanity become more than what we are: strong enough to contain AGI while we ourselves become smarter and move toward full digitization of humanity for those who want it. Then we can contain ASI safely and embrace the singularity. Please help save yourself and humanity by supporting the Collective Superintelligence Summit. Sign up and attend here:


This is the early bird sign-up for the virtual summit, held June 4th from 6 am to 4 pm PST via Zoom and YouTube. Speakers, panelists, and workshops will be hosted in Zoom, and the event will be streamed via YouTube.

Who is Running the Summit:

This summit is run in conjunction with the BICA (Biologically Inspired Cognitive Architecture) Society, the AGI Laboratory, and The Foundation.

Robotic Assistance Devices to Integrate EAGL Gunshot Detection Technology into All Security Devices

HENDERSON, Nev.—Artificial Intelligence Technology Solutions, Inc. (OTCPK: AITX) today announced that its wholly owned subsidiary Robotic Assistance Devices (RAD) has entered into an agreement with EAGL Technology, Inc. to offer EAGL’s Gunshot Detection System (GDS) in all present and foreseeable future RAD devices.

“We have been receiving repeated requests from industries as varied as transit operators, retail property managers, and law enforcement that gunshot detection capabilities be built into RAD devices. Integrating EAGL’s technology into RAD’s autonomous response solutions should be well received by all of the markets we serve.”

EAGL Technology was established in 2015 after acquiring gunshot ballistic science developed by the Department of Energy (DOE) Pacific Northwest National Laboratory (PNNL). EAGL has advanced this technology by creating a state-of-the-art security system. The EAGL product offering utilizes the company’s patented FireFly® Ballistic Sensor technology which RAD will offer, as an integrated option, on all mobile and stationary security solutions. EAGL clients include Honeywell, Johnson Controls, Siemens and many more.

AI-controlled vertical farm produces 400 times more food per acre than a flat farm

Dedicated to those who argue that life extension is bad because it will create overpopulation problems. In addition to the fact that birth rates are dangerously declining in some developed countries, this is only one example of the changes that will take place well before life extension could create a problem of that type, if it ever does.


Plenty, an ag-tech startup in San Francisco co-founded by Nate Storey, has been able to increase its productivity and product quality by using artificial intelligence and a new farming strategy. The company’s farms take up only 2 acres yet produce 720 acres’ worth of fruit and vegetables. In addition to this impressive food production, the company manages its operations with robots and artificial intelligence.

The company says their farm produces about 400 times more food per acre than a traditional farm. It uses robots and AI to monitor water consumption, light, and the ambient temperature of the environment where plants grow. Over time, the AI learns how to grow crops faster with better quality.

While this is great for food quality, it also helps conserve resources: the water is recycled and evaporated water is recaptured, so there is virtually no waste. The startup estimates that this smart farm is so efficient that it produces better fruits and vegetables using 95% less water and 99% less land than normal farming operations.

More Than Words: Using AI to Map How the Brain Understands Sentences

Summary: Combining neuroimaging data with artificial intelligence technology, researchers have identified a complex network within the brain that comprehends the meaning of spoken sentences.

Source: University of Rochester Medical Center.

Have you ever wondered why you are able to hear a sentence and understand its meaning – given that the same words in a different order would have an entirely different meaning?

Expressing some doubts: Comparative analysis of human and android faces could lead to improvements

Researchers from the Graduate School of Engineering and Symbiotic Intelligent Systems Research Center at Osaka University used motion capture cameras to compare the expressions of android and human faces. They found that the mechanical facial movements of the robots, especially in the upper regions, did not fully reproduce the curved flow lines seen in the faces of actual people. This research may lead to more lifelike and expressive artificial faces.

The field of robotics has advanced a great deal in recent decades. However, while current androids can appear very humanlike at first, their active facial expressions are still unnatural and unsettling to people. The exact reasons for this effect have been difficult to pinpoint. Now, a research team at Osaka University has used motion capture technology to monitor the facial expressions of five android faces and compared the results with actual human facial expressions. This was accomplished with six infrared cameras that monitored reflection markers at 120 frames per second and allowed the motions to be represented as three-dimensional displacement vectors.
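The core measurement described above — tracking reflective markers at 120 frames per second and expressing facial motion as three-dimensional displacement vectors — can be sketched in a few lines. This is an illustrative reconstruction, not code from the Osaka study; the array shapes and the function name `displacement_vectors` are assumptions.

```python
import numpy as np

def displacement_vectors(positions):
    """Compute per-frame 3D displacement of each reflective marker
    relative to a neutral pose (here, the first captured frame).

    positions: array of shape (frames, markers, 3) holding x, y, z
    marker coordinates sampled at 120 fps.
    Returns an array of the same shape: positions minus the neutral pose.
    """
    neutral = positions[0]        # neutral expression, frame 0
    return positions - neutral    # broadcasts over all frames

# Toy example: a single marker that moves 2 mm along z between frames.
frames = np.array([[[0.0, 0.0, 0.0]],
                   [[0.0, 0.0, 2.0]]])
disp = displacement_vectors(frames)
print(disp[1, 0])  # displacement of marker 0 in frame 1
```

In a real pipeline the neutral frame would be a calibrated rest expression rather than simply frame 0, and marker positions would first be reconstructed from the six camera views.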

“Advanced artificial systems can be difficult to design because the numerous components interact with each other in complex ways. The surface of an android face can undergo deformations that are hard to control,” says study first author Hisashi Ishihara. These deformations can be due to interactions between components such as the soft skin sheet, the skull-shaped structure, and the mechanical actuators.