Perhaps it’s serendipitous, then, that the machines have finally arrived. Truly smart, truly impressive robots and machine learning algorithms that may help usher in a new Green Revolution to keep humans fed on an increasingly mercurial planet. Think satellites that automatically detect drought patterns, tractors that eyeball plants and kill the sick ones, and an AI-powered smartphone app that can tell a farmer what disease has crippled their crop.
Forget scarecrows. The future of agriculture is in the hands of the machines.
A Digital Green Thumb
Deep learning is a powerful method of computing in which programmers don’t explicitly tell a computer what to do, but instead train it to recognize certain patterns. You could feed a computer photos of diseased and healthy plant leaves, labeled as such. From these it will learn what diseased and healthy leaves look like, and determine the health of new leaves on its own.
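The leaf-labeling idea above can be sketched in a few lines of Python. This is a deliberately minimal stand-in, assuming toy color-channel features instead of real photos and a single-layer logistic classifier instead of a full deep network; the training loop illustrates the same pattern-learning principle.

```python
import numpy as np

# Toy stand-in for labeled leaf photos: each "image" is reduced to a
# feature vector (average R, G, B), labeled diseased (1) or healthy (0).
rng = np.random.default_rng(0)
healthy = rng.normal(loc=[0.2, 0.8, 0.2], scale=0.05, size=(50, 3))   # greenish
diseased = rng.normal(loc=[0.6, 0.5, 0.2], scale=0.05, size=(50, 3))  # brownish
X = np.vstack([healthy, diseased])
y = np.array([0] * 50 + [1] * 50)

# A minimal logistic-regression classifier trained by gradient descent:
# the program is never told what "diseased" looks like, it learns the
# pattern from the labeled examples.
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of disease
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

def predict(leaf):
    """Classify a new, unseen feature vector as 'diseased' or 'healthy'."""
    return "diseased" if (leaf @ w + b) > 0 else "healthy"

print(predict(np.array([0.21, 0.79, 0.20])))  # near the healthy cluster
print(predict(np.array([0.61, 0.50, 0.21])))  # near the diseased cluster
```

In a real system the feature vectors would be raw pixels fed to a convolutional network, but the workflow, labeled examples in, learned decision rule out, is the same.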
Math isn’t everyone’s strong suit, especially those who haven’t stretched that part of their brain since college. Thanks to the wonders of image recognition technology, we now have Mathpix, an iOS app that lets you point your phone camera at a problem and calculates solutions in seconds.
The interface looks like any standard camera app: drag the on-screen reticle over an equation and the app solves it, graphing the answer where appropriate. More useful is a step-by-step guide offering multiple solution methods, which makes this a bona fide educational tool. The app uses image recognition to parse the problem and pings its servers for the mathematical heavy lifting, so it likely requires an internet connection to work.
Mathpix was envisioned by Stanford PhD student Nico Jimenez, who was advised by Stanford grad Paul Ferrell. The app’s other developers are high schoolers Michael Lee and August Trollback, a remarkably young team for an app that claims to be the first to visually recognize and solve handwritten math problems.
I am glad that D. Whyte recognizes “If quantum computers are developed faster than anticipated, certification would mandate insecure modules, given the time to approve and implement new quantum resistant algorithms. Worse, it is conceivable that data encrypted by a certified module is more vulnerable than data encrypted by a non-certified module that has the option of using a quantum-safe encryption algorithm.”
Many of us who are researching and developing in this space have seen the pace of development accelerate this year; what looked like ten years away now looks like less than seven.
Dr. William Whyte, Chief Scientist for Security Innovation, a cybersecurity provider and leader in the 2015 Gartner Magic Quadrant for Security Awareness Training, will be presenting at the Fourth International Cryptographic Module Conference in Ottawa, Ontario.
A team of Australian physicists has developed a new research assistant, in the form of an artificial intelligence (AI) algorithm, to carry out experiments in quantum mechanics; it quickly took control of the experiment, learned the tasks, and even innovated. In a statement, co-lead researcher Paul Wigley from the Australian National University (ANU) Research School of Physics and Engineering said he didn’t expect the machine to be able to conduct the experiment itself from scratch within an hour.
He added that a simple computer program would have taken longer than the age of the universe to work through all the combinations.
The scientists set out to reproduce an experiment that was awarded the 2001 Nobel Prize in Physics, which involved trapping an extremely cold gas, known as a Bose-Einstein condensate, in a laser beam.
Given that Los Alamos Labs has been, and continues to be, advancing cybersecurity work on the quantum internet, in partnership with other labs and universities, why isn’t Mason collaborating with Los Alamos on developing an improved hacker-proof net? This doesn’t look like the most effective and cost-efficient approach.
Imagine burglars have targeted your home, but before they break in, you’ve already moved and are safe from harm.
Now apply that premise to protecting a computer network from attack. Hackers try to bring down a network, but critical tasks are a step ahead of them, thanks to complex algorithms. The dreaded “network down” or denial of service message never flashes on your screen.
That’s the basic idea behind new research by George Mason University researchers, who recently landed some $4 million in grants from the Defense Advanced Research Projects Agency (DARPA). George Mason’s researchers are leading an effort that includes Columbia University, Penn State University and BAE Systems.
Theoretical chemists at Princeton University have pioneered a strategy for modeling quantum friction, or how a particle’s environment drags on it, a vexing problem in quantum mechanics since the birth of the field. The study was published in the Journal of Physical Chemistry Letters (“Wigner–Lindblad Equations for Quantum Friction”). “It was truly a most challenging research project in terms of technical details and the need to draw upon new ideas,” said Denys Bondar, a research scholar in the Rabitz lab and corresponding author on the work.
Quantum friction may operate at the smallest scale, but its consequences can be observed in everyday life. For example, when fluorescent molecules are excited by light, it’s because of quantum friction that the atoms are returned to rest, releasing photons that we see as fluorescence. Realistically modeling this phenomenon has stumped scientists for almost a century and recently has gained even more attention due to its relevance to quantum computing.
An algorithm developed by Google is designed to encode thought, which could lead to computers with ‘common sense’ within a decade, says leading AI scientist.
If you’ve ever seen a “recommended item” on eBay or Amazon that was just what you were looking for (or maybe didn’t know you were looking for), it’s likely the suggestion was powered by a recommendation engine. In a recent interview, Raefer Gabriel, co-founder of machine learning startup Delvv, Inc., said these applications of recommendation engines and collaborative filtering algorithms are just the beginning of a powerful and broad-reaching technology.
Gabriel noted that content discovery on services like Netflix, Pandora, and Spotify is most familiar to people because of the way those services seem to “speak” to one’s preferences in movies, games, and music. Their relatively narrow focus on entertainment is a common thread that has made them successful as constrained domains. The challenge lies in developing recommendation engines for unbounded domains, like the internet, where there is more or less unlimited information.
“Some of the more unbounded domains, like web content, have struggled a little bit more to make good use of the technology that’s out there. Because there is so much unbounded information, it is hard to represent well, and to match well with other kinds of things people are considering,” Gabriel said. “Most of the collaborative filtering algorithms are built around some kind of matrix factorization technique and they definitely tend to work better if you bound the domain.”
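The matrix factorization technique Gabriel mentions can be sketched briefly. In this hypothetical example, a small user-item ratings matrix (with made-up numbers, zeros marking unrated items) is factored into two low-rank matrices by gradient descent on the observed entries; the product of the factors then fills in predictions for the blanks.

```python
import numpy as np

# Tiny illustrative ratings matrix: rows are users, columns items,
# 0 means "not yet rated".
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)
mask = R > 0                        # which entries are observed

# Factor R ≈ U @ V.T with k latent features, fitting only the
# observed entries (bare-bones matrix factorization).
k, lr, reg = 2, 0.01, 0.02
rng = np.random.default_rng(1)
U = rng.normal(scale=0.1, size=(R.shape[0], k))
V = rng.normal(scale=0.1, size=(R.shape[1], k))

for _ in range(5000):
    err = (R - U @ V.T) * mask      # error on rated entries only
    U += lr * (err @ V - reg * U)
    V += lr * (err.T @ U - reg * V)

pred = U @ V.T                      # predictions, including for the zeros
print(np.round(pred, 1))
```

Bounding the domain helps precisely because a matrix like `R` needs a fixed, manageable set of columns; open web content gives you an effectively unbounded, sparse item space that is much harder to factor well.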
Of all the recommendation engines and collaborative filters on the web, Gabriel cites Amazon’s as the most ambitious. The eCommerce giant uses a number of strategies to produce item-to-item recommendations, complementary purchases, user preferences, and more. The key lies less in the algorithm than in the value of the data Amazon can feed into it initially; once the company reaches a critical mass of data on user preferences, creating recommendations for new users becomes much easier.
“In order to handle those fresh users coming into the system, you need to have some way of modeling what their interest may be based on that first click that you’re able to extract out of them,” Gabriel said. “I think that intersection point between data warehousing and machine learning problems is actually a pretty critical intersection point, because machine learning doesn’t do much without data. So, you definitely need good systems to collect the data, good systems to manage the flow of data, and then good systems to apply models that you’ve built.”
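Gabriel's point about modeling a fresh user from a single first click can be illustrated with a simple item-to-item approach. This is a hedged sketch, not his company's method: the interaction data and item names below are invented, and the model just ranks other items by cosine similarity to the clicked one, computed from existing users' interaction columns.

```python
import numpy as np

# Made-up interaction matrix: rows are existing users, columns items,
# 1 = user interacted with the item.
interactions = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 1, 0, 1, 1],
])
items = ["camera", "tripod", "lens", "bag", "flash"]

# Item-to-item cosine similarity from the interaction columns.
cols = interactions.T.astype(float)
norms = np.linalg.norm(cols, axis=1, keepdims=True)
sim = (cols @ cols.T) / (norms * norms.T)

def recommend_after_first_click(item, n=2):
    """Rank other items by similarity to a fresh user's first click."""
    i = items.index(item)
    order = np.argsort(-sim[i])
    return [items[j] for j in order if j != i][:n]

print(recommend_after_first_click("camera"))
```

A new user who clicks "camera" gets items that existing camera-clickers also touched, which is one way of extracting signal from that single first click before any richer profile exists.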
Beyond consumer-oriented uses, Gabriel has seen recommendation engines and collaborative filtering systems used in a narrow scope for medical applications and in manufacturing. In healthcare, for example, he cited recommendations based on treatment preferences, doctor specialties, and other relevant decision-based suggestions. More generally, anything you can transform into a “model of relationships between items and item preferences” can map directly onto some form of recommendation engine or collaborative filter.
One of the most important drivers of the development of recommendation engines and collaborative filtering algorithms was the Netflix Prize, Gabriel said. The competition, which offered a $1 million prize to anyone who could design an algorithm that improved upon Netflix’s proprietary recommendation engine, allowed entrants to use pieces of the company’s own user data to develop a better algorithm. The competition spurred a great deal of interest in the potential applications of collaborative filtering and recommendation engines, he said.
In addition, relatively easy access to abundant, cheap memory is another driving force behind the development of recommendation engines. An eCommerce company like Amazon, with millions of items, needs plenty of memory to store millions of different pieces of item and correlation data while also storing user data in potentially large blocks.
“You have to think about a lot of matrix data in memory. And it’s a matrix, because you’re looking at relationships between items and other items and, obviously, the problems that get interesting are ones where you have lots and lots of different items,” Gabriel said. “All of the fitting and the data storage does need quite a bit of memory to work with. Cheap and plentiful memory has been very helpful in the development of these things at the commercial scale.”
Looking forward, Gabriel sees recommendation engines and collaborative filtering systems evolving more toward predictive analytics and getting a handle on the unbounded domain of the internet. While those efforts may ultimately be driven by the Google Now platform, he foresees a time when recommendation-driven data will merge with search data to provide search results before you even search for them.
“I think there will be a lot more going on at that intersection between the search and recommendation space over the next couple years. It’s sort of inevitable,” Gabriel said. “You can look ahead to what someone is going to be searching for next, and you can certainly help refine and tune into the right information with less effort.”
While “mind-reading” search engines may still seem a bit like science fiction at present, the capabilities are evolving at a rapid pace, with predictive analytics leading the way.