Archive for the ‘robotics/AI’ category: Page 1680

Jun 14, 2020

DeepCode learns from GitHub project data to give developers AI-powered code reviews

Posted by in category: robotics/AI

Often, code reviews involve collaboration between the original code authors, their peers, and managers, with a view toward finding obvious errors before the code moves to a more advanced phase. And the bigger a project is, the more lines of code there are to review, making this a time-consuming process. There are options out there for analyzing source code for errors, such as the static analysis tool Lint, but these are often not holistic in scope: they focus on a smaller, targeted set of “annoying and repeatable stylistic issues, formatting and minor issues,” according to Paskalev.

DeepCode’s selling point is that it covers a broader range of problems, including vulnerabilities such as cross-site scripting and SQL injection, and it promises to establish the intent behind the code rather than merely spotting syntax mistakes. Underpinning all this are machine learning (ML) systems trained on billions of lines of code from public open source projects, which constantly learn and update their knowledge base.
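
To make that distinction concrete, here is a hypothetical Python snippet showing the kind of SQL injection flaw such vulnerability-focused analysis targets, alongside the parameterized version a reviewer would expect. The function and table names are illustrative; this is not DeepCode's actual rule set or output.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: user input is concatenated straight into the SQL string, so
    # input like "x' OR '1'='1" rewrites the query's meaning (SQL injection).
    query = "SELECT * FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safe: the value is bound as a parameter and never interpreted as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()
```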

Though DeepCode can ingest code from any source code repository, Paskalev told VentureBeat that the public knowledge base today consists mostly of GitHub repositories.

Jun 14, 2020

DeepCoder from Microsoft could leave programmers without work

Posted by in categories: information science, robotics/AI

Artificial intelligence (AI) is a broad field made up of many disciplines, such as robotics and machine learning. The aim of AI is to create machines capable of performing tasks and cognitive functions that are otherwise only within the scope of human intelligence. To get there, machines must be able to learn these capabilities automatically rather than having each one explicitly programmed end-to-end.

Another task for AI is writing programs. Microsoft, in conjunction with the University of Cambridge, has developed just such a technology: a program that can create other programs by borrowing code. The invention is called DeepCoder. The software takes a developer’s requirements into account and finds matching code fragments in a large database. You can see the researchers’ work here.
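
The idea underneath systems like this is to search over compositions of known operations until one reproduces the input-output examples a developer provides, with learning used to steer that search. The Python sketch below shows only the unguided search step over a toy set of list operations; the primitives and examples are made up for illustration and are not Microsoft's actual system.

```python
from itertools import product

# Toy DSL of list-transforming primitives (illustrative only).
PRIMITIVES = {
    "sort": sorted,
    "reverse": lambda xs: list(reversed(xs)),
    "double": lambda xs: [2 * x for x in xs],
    "drop_negatives": lambda xs: [x for x in xs if x >= 0],
}

def synthesize(examples, max_depth=3):
    """Return the shortest sequence of primitives consistent with all (input, output) pairs."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def run(xs, names=names):
                for name in names:
                    xs = PRIMITIVES[name](xs)
                return xs
            if all(run(inp) == out for inp, out in examples):
                return names
    return None

# Usage: find a program that sorts a list and doubles every element.
print(synthesize([([3, 1, 2], [2, 4, 6]), ([0, 5], [0, 10])]))  # e.g. ('sort', 'double')
```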

“The potential for automating the writing of software code is just incredible. It means a huge reduction in the effort required to develop code. Such a system will be much more productive than any human. In addition, you can create systems that were previously impossible to build.”

Jun 14, 2020

AI makes blurry faces look 64 times sharper

Posted by in categories: information science, robotics/AI

A new algorithm takes pixelated images of faces and creates realistic-looking versions with up to 64 times the resolution.
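
One common way such results are achieved is not by sharpening pixels directly but by searching the latent space of a generative face model for a high-resolution image that matches the blurry input once downscaled. Below is a minimal PyTorch-style sketch of that idea under stated assumptions: `G` is a hypothetical pretrained generator with a `latent_dim` attribute, and the loop simply optimizes the latent vector; this is an illustration of the approach, not the specific algorithm reported here.

```python
# Minimal sketch of latent-space search for face upsampling; `G` is a
# hypothetical pretrained generator (latent vector -> high-res face image).
import torch

def upscale_by_search(lr_image: torch.Tensor, G, scale: int = 8,
                      steps: int = 500, step_size: float = 0.1) -> torch.Tensor:
    z = torch.randn(1, G.latent_dim, requires_grad=True)   # latent to optimize
    optimizer = torch.optim.Adam([z], lr=step_size)
    downscale = torch.nn.AvgPool2d(kernel_size=scale)      # differentiable downscaling
    for _ in range(steps):
        hr_candidate = G(z)                                 # candidate high-res face
        # The candidate must reproduce the low-res input when shrunk back down.
        loss = ((downscale(hr_candidate) - lr_image) ** 2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return G(z).detach()
```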

Jun 13, 2020

OpenAI API

Posted by in category: robotics/AI

We’re releasing an API for accessing new AI models developed by OpenAI. Unlike most AI systems, which are designed for one use case, the API today provides a general-purpose “text in, text out” interface, allowing users to try it on virtually any English language task. You can now request access in order to integrate the API into your product, develop an entirely new application, or help us explore the strengths and limits of this technology.
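
For context, the sketch below shows what that “text in, text out” pattern looks like from Python using the openai client package; the engine name and parameters are illustrative, and an API key obtained through the access program is assumed.

```python
# Illustrative "text in, text out" request with the openai Python package.
# Assumes an API key from OpenAI's access program; the engine name and
# parameters here are examples, not a recommendation.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    engine="davinci",
    prompt="Correct the grammar: She no went to the market.",
    max_tokens=32,
    temperature=0.2,
)
print(response["choices"][0]["text"].strip())
```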

Jun 13, 2020

Ethics Review Boards and AI Self-Driving Cars

Posted by in categories: ethics, robotics/AI, transportation

What does this have to do with AI self-driving cars?

AI Self-Driving Cars Will Need to Make Life-or-Death Judgements

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One crucial aspect of the AI for self-driving cars is the need for the AI to make “judgments” about driving situations, ones that involve life-and-death matters.

Jun 13, 2020

Meet the Future Tech the U.S. Army Wants to Use for Its Soldiers

Posted by in categories: futurism, robotics/AI

AI, networked sensors, and heads-up displays.

Jun 13, 2020

Are AI-Powered Killer Robots Inevitable?

Posted by in categories: drones, military, nuclear weapons, robotics/AI, singularity

Autonomous weapons present some unique challenges to regulation. They can’t be observed and quantified in quite the same way as, say, a 1.5-megaton nuclear warhead. Just what constitutes autonomy, and how much of it should be allowed? How do you distinguish an adversary’s remotely piloted drone from one equipped with Terminator software? Unless security analysts can find satisfactory answers to these questions and China, Russia, and the US can decide on mutually agreeable limits, the march of automation will continue. And whichever way the major powers lead, the rest of the world will inevitably follow.


Military scholars warn of a “battlefield singularity,” a point at which humans can no longer keep up with the pace of conflict.

Jun 13, 2020

MIT’s Tiny New Brain Chip Aims for AI in Your Pocket

Posted by in categories: information science, robotics/AI

The human brain operates on roughly 20 watts of power (a third of a 60-watt light bulb) in a space the size of, well, a human head. The biggest machine learning algorithms use closer to a nuclear power plant’s worth of electricity and racks of chips to learn.

That’s not to slander machine learning, but nature may have a tip or two to improve the situation. Luckily, there’s a branch of computer chip design heeding that call. By mimicking the brain, super-efficient neuromorphic chips aim to take AI off the cloud and put it in your pocket.

The latest such chip is smaller than a piece of confetti and has tens of thousands of artificial synapses made out of memristors, chip components that can mimic their natural counterparts in the brain.
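
The appeal of memristor arrays is that a layer’s matrix-vector multiply happens in one analog step: each device’s conductance stores a synaptic weight, input activations are applied as row voltages, and by Ohm’s and Kirchhoff’s laws the current collected on each column is already the weighted sum. Here is a small NumPy sketch of that arithmetic, with made-up values and no claim about MIT’s actual chip.

```python
import numpy as np

# Conductances of a 4x3 memristor crossbar (in siemens); each entry plays the
# role of one artificial synapse's weight. Values are invented for illustration.
conductances = np.array([
    [1.0e-6, 5.0e-7, 2.0e-6],
    [3.0e-6, 1.5e-6, 8.0e-7],
    [2.5e-6, 9.0e-7, 1.2e-6],
    [4.0e-7, 2.2e-6, 6.0e-7],
])

# Input activations encoded as voltages on the four row wires (in volts).
voltages = np.array([0.2, 0.0, 0.5, 0.3])

# Ohm's law per device and Kirchhoff's current law per column: the three column
# currents equal the matrix-vector product, computed by the physics of the array.
column_currents = voltages @ conductances
print(column_currents)  # weighted sums that would feed the next layer of neurons
```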

Jun 13, 2020

Facebook just released a database of 100,000 deepfakes to teach AI how to spot them

Posted by in categories: cybercrime/malcode, robotics/AI

Deepfakes have struck a nerve with the public and researchers alike. There is something uniquely disturbing about these AI-generated images of people appearing to say or do something they didn’t.

With tools for making deepfakes now widely available and relatively easy to use, many also worry that they will be used to spread dangerous misinformation. Politicians, for example, could have other people’s words put into their mouths or be made to appear in situations they never took part in.

That’s the fear, at least. The truth is that, to a human eye, deepfakes are still relatively easy to spot. And according to a report from cybersecurity firm DeepTrace Labs in October 2019, still the most comprehensive to date, they have not been used in any disinformation campaign. Yet the same report also found that the number of deepfakes posted online was growing quickly, with around 15,000 appearing in the previous seven months. That number will be far larger now.

Jun 13, 2020

Driverless cars might solve traffic problems, but at what social cost?

Posted by in categories: robotics/AI, transportation

Driverless cars are coming, and they’re likely to make life on the road easier and more convenient — for some of us. But will they create new ethical problems?