With the first round of the Drone Racing League’s Level 1 race finished, eight pilots will compete for a spot in the finals.
Human trials of BMIs (brain-machine interfaces) could, according to DARPA and David Axe, begin as early as 2017. The plan is to use stentrodes, which have already been tested successfully in sheep. I personally have concerns on two fronts: health (the article highlights that the devices are prone to causing blood clots), and security, since anything connected via Wi-Fi or the net will attract hackers out to prove that anything is hackable. Before this goes live in a person, we should make sure we have a more secure, hack-resistant network, so that no one is injured or put in a position to injure someone else.
Soldiers could control drones with a thought.
On Monday at the Mobile World Congress in Barcelona, Mark Zuckerberg took part in what he thought would be a “fireside chat” with Wired’s Jessi Hempel but which was decidedly not fireside and was, in fact, a keynote.
Inverse picked out the best nine moments of this interview.
1.) Zuck doesn’t know that Aquila will meet regulations but is just confident that it’ll work out
Zuck reported that Aquila, Facebook’s casual wifi-beaming, solar-powered drone project, is coming along well. A team is currently constructing the second full-scale drone — which has the wingspan of a 747, is only as heavy as a car, and will be able to stay aloft for as long as six months — and another team is testing large-but-not-full-scale models every week. These drones will transmit high-bandwidth signals via a laser communications system, which, he says, requires a degree of accuracy on par with hitting a quarter on the top of the Statue of Liberty with a laser pointer in California. The goal, he added, is to get these drones beaming wifi that’s 10 to 100 times faster than current systems. Facebook will roll out its first full-scale trials later this year, and Zuck expects that within 18 months, Aquila will be airborne.
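As a side note, here is a rough sense of what that pointing claim implies. Using my own approximate figures (a US quarter is about 24 mm across, and New York is very roughly 4,100 km from California; neither number comes from the article), the required accuracy works out to a handful of nanoradians:

```python
import math

# Rough, illustrative check of the "hit a quarter on the Statue of Liberty
# with a laser pointer in California" claim. Both numbers below are my own
# approximations, not figures from the article.
quarter_diameter_m = 0.024        # a US quarter is roughly 24 mm across
california_to_nyc_m = 4.1e6       # very roughly 4,100 km coast to coast

# Small-angle approximation: required angle (radians) ~ target size / distance
angle_rad = quarter_diameter_m / california_to_nyc_m
angle_marcsec = math.degrees(angle_rad) * 3600 * 1000

print(f"Implied pointing accuracy: {angle_rad:.1e} rad "
      f"(about {angle_marcsec:.1f} milliarcseconds)")
```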
The advent of 5G is likely to bring another splurge of investment, just as orders for 4G equipment are peaking. The goal is to be able to offer users no less than the “perception of infinite capacity”, says Rahim Tafazolli, director of the 5G Innovation Centre at the University of Surrey. Rare will be the device that is not wirelessly connected, from self-driving cars and drones to the sensors, industrial machines and household appliances that together constitute the “internet of things” (IoT).
It is easy to dismiss all this as “a lot of hype”, in the words of Kester Mann of CCS Insight, a research firm. When it comes to 5G, much is still up in the air: not only which band of radio spectrum and which wireless technologies will be used, but what standards makers of network gear and handsets will have to comply with. Telecoms firms have reached consensus only on a set of rough “requirements”. The most important are connection speeds of up to 10 gigabits per second and response times (“latency”) of below 1 millisecond.
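For a quick sense of scale (my own illustrative arithmetic, not part of any 5G specification): at 10 gigabits per second a 5 GB file downloads in about four seconds, and a 1-millisecond round trip caps how far a signal can physically travel, which is one reason so much of the 5G discussion involves pushing computing toward the network edge.

```python
# Illustrative arithmetic on the two headline 5G "requirements" quoted above
# (10 Gbit/s peak speed, sub-1 ms latency). The file size and the idealized
# speed-of-light budget are my own examples, not spec figures.

peak_rate_bps = 10e9                 # 10 gigabits per second
movie_bytes = 5e9                    # a ~5 GB HD movie, for example
download_s = movie_bytes * 8 / peak_rate_bps
print(f"5 GB download at 10 Gbit/s: ~{download_s:.0f} seconds")

speed_of_light_m_s = 3e8
latency_budget_s = 1e-3              # 1 millisecond, round trip
# Even ignoring all processing time, a signal can only travel so far and
# back within 1 ms -- hence the push toward edge computing.
max_radius_km = speed_of_light_m_s * latency_budget_s / 2 / 1000
print(f"Max round-trip radius within 1 ms: ~{max_radius_km:.0f} km")
```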
Personally, I am not a Breitbart fan; however, I am publishing this article to highlight something that I noticed. The article cites Asimov’s Three Laws of Robotics, which are old and need to be updated. One of those laws, “A robot may not injure a human being or, through inaction, allow a human being to come to harm,” no longer holds true. Why? Because as long as criminals with enough money can pay others well to re-engineer or re-program robotics, robots can become dangerous to humans. Today’s drones are a good example of this, already being used by stalkers, drug cartels, and others.
Robotics, once the almost exclusive purview of science fiction, is now approaching a point at which it will be capable of dramatic influence over humanity. These advancements are as much a lesson in caution as in the wonder of the human imagination.
This article takes an amusing look at killer robots and how governments should address the threat at a national level. If we (in my case, the US) were invaded, or a whole army of robots landed on the shores of Florida, New York, or California, then yes, Congress would need to approve a war, which is what this article highlights. However, attacking robots will most likely not be the result of an invasion by another country; they will be the result of criminals and others who have hacked or reprogrammed the robots.
Cartels, terrorists, and others will pay well to have self-driving cars, humanoid robots, and similar systems re-engineered and re-programmed for their own benefit, turning them into weapons against individuals and the wider population.
The United Nations’ effort to ban killer robots will fail, but there are three important steps the United States can take to help slow the rise of lethal autonomous weapons systems, one of the most prominent voices in the robotics debate said this week.
Pentagon officials insist they don’t want to allow an autonomous weapon to kill people without a human in the loop, but greater levels of autonomy and artificial intelligence are making their way into more and more pieces of military technology, such as recognizing targets, piloting drones, and driving supply trucks. Defense Department leaders advocate for robotic intelligence and autonomy as threat-reducing (and cost-saving) measures key to securing the United States’ technological advantage over adversaries in the coming decades (the so-called “third offset” strategy). Defense Secretary Ash Carter, Deputy Defense Secretary Bob Work and former Defense Secretary Chuck Hagel have talked up the importance of artificial intelligence to the military’s future plans.
“We know we’re going to have to go somewhere with automation,” Air Force Brig. Gen. John Rauch, director of ISR (intelligence, surveillance, and reconnaissance) Capabilities for the Air Force, said at a Tuesday breakfast in Washington sponsored by AFCEA, a technology industry association. Rauch was referring to the rapidly growing demands on human image analysts in the Air Force, especially as additional small drones enter service in the years ahead. “It’s: ‘What level of automation is allowed?’ And then when you start talking about munitions, it becomes a whole nother situation.” The Air Force will be coming out with a flight plan for small unmanned aerial systems, or UASs, in the next four months, Lt. Gen. Robert Otto, deputy chief of staff for Intelligence, Surveillance and Reconnaissance, said at the meeting.
There is a need for a larger “official and governmental” review and oversight board for drones, robots, and the like because of the criminal element; however, any review needs to focus first on the immediate criminal elements that can and already do use this technology, and on how best to manage it. As with guns, we may see a need for background checks, registration, and licensing for drones and certain robots as a way to better vet and track who can own them.
At AAAI-16, a panel discussed the safety that will be necessary when it comes to autonomous manned and unmanned aircraft. Here’s what you need to know.
InVisage has just announced the release of a new infrared image sensor for eye-scan security recognition devices. Since InVisage also developed and released a new film leveraging quantum-dot (Q-Dot) technology, the new sensor leverages that technology as well for more accurate readings and imaging.
InVisage’s new image sensor for infrared cameras could help drones avoid trees and could aid virtual reality headsets in seeing where you’re pointing.
A search of the U.S. Patent and Trademark Office’s database shows some recently filed patents by Google. The search engine titan is obviously firing on all cylinders in its research into robots/drones, driverless vehicles, and what looks to be either a Google Glass reboot or, perhaps, stylish frames for virtual reality headsets.
Several patents for eyewear, which Google simply refers to as “glasses” in the patent abstracts, show more stylish frames than the Google Glass prototype released in 2013.