
Archive for the ‘information science’ category: Page 165

Jul 6, 2020

How AI Sees Through the Looking Glass: Things Are Different on the Other Side of the Mirror

Posted in categories: information science, robotics/AI, transportation

Text is backward. Clocks run counterclockwise. Cars drive on the wrong side of the road. Right hands become left hands.

Intrigued by how reflection changes images in subtle and not-so-subtle ways, a team of Cornell researchers used artificial intelligence to investigate what sets originals apart from their reflections. Their algorithms learned to pick up on unexpected clues such as hair parts, gaze direction and, surprisingly, beards – findings with implications for training machine learning models and detecting faked images.
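The article doesn't spell out the training setup, but the general recipe is easy to sketch. Below is a minimal, self-contained Python illustration (a toy logistic regression on synthetic data, not the Cornell team's actual model): generate images with a built-in left-right cue, mirror half of them, and train a classifier to tell originals from reflections.

```python
import numpy as np

# Toy "mirror detector": logistic regression trained to tell original
# images from horizontal reflections. A stand-in for the CNN-scale
# models used in the actual research; the data here is synthetic.

rng = np.random.default_rng(0)

def make_batch(n, size=16):
    # Synthetic "images" with a left-right asymmetry (a brighter left
    # half), so mirroring is actually detectable.
    imgs = rng.random((n, size, size))
    imgs[:, :, : size // 2] += 0.5                      # asymmetric cue
    labels = rng.integers(0, 2, n)                      # 1 = mirrored
    imgs[labels == 1] = imgs[labels == 1][:, :, ::-1]   # flip those samples
    return imgs.reshape(n, -1), labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

X, y = make_batch(2000)
w, b, lr = np.zeros(X.shape[1]), 0.0, 0.1
for _ in range(300):                                    # plain gradient descent
    p = sigmoid(X @ w + b)
    g = p - y                                           # cross-entropy gradient
    w -= lr * X.T @ g / len(y)
    b -= lr * g.mean()

X_test, y_test = make_batch(500)
acc = ((sigmoid(X_test @ w + b) > 0.5) == y_test).mean()
print(f"flip-detection accuracy: {acc:.2%}")
```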

Jul 1, 2020

Scientists Fire Up a Commercially Available Desktop Quantum Computer

Posted in categories: computing, education, information science, quantum physics

Scientists suggest a desktop quantum computer based on nuclear magnetic resonance (NMR) could soon be on its way to a classroom near you. Although the device might not be suited to handle large quantum applications, the makers say it could help students learn about quantum computing.

SpinQ Chief Scientist Prof. Bei Zeng of the University of Guelph announced the SpinQ Gemini, a two-qubit desktop quantum computer, at the industry session of the Quantum Information Processing (QIP2020) conference, held recently in Shenzhen, China. According to the researchers, it is the first time a desktop quantum computer has been made commercially available.

SpinQ Gemini is built around state-of-the-art permanent-magnet technology that provides a 1 T magnetic field; it runs at room temperature and is maintenance-free. It demonstrates quantum algorithms such as Deutsch’s algorithm and Grover’s algorithm for teaching quantum computing to university and high school students, and also provides advanced models for quantum circuit design and control sequence design for researchers.
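Grover's algorithm on two qubits is small enough to simulate exactly with a state vector, which is roughly what a teaching device at this scale demonstrates. Here is a generic NumPy sketch (not SpinQ's software); for two qubits, a single Grover iteration already rotates the state onto the marked item.

```python
import numpy as np

# Grover's algorithm on two qubits, simulated with plain NumPy state
# vectors -- the kind of demonstration circuit described above.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H2 = np.kron(H, H)                       # Hadamard on both qubits

marked = 0b11                            # the item the oracle "knows"
oracle = np.eye(4)
oracle[marked, marked] = -1              # flip the phase of the marked state

diffusion = 2 * np.full((4, 4), 0.25) - np.eye(4)   # 2|s><s| - I

state = H2 @ np.array([1, 0, 0, 0])      # uniform superposition from |00>
state = diffusion @ (oracle @ state)     # one Grover iteration suffices for n=2

probs = np.abs(state) ** 2
print({f"|{i:02b}>": round(p, 3) for i, p in enumerate(probs)})
# -> the marked state |11> is measured with probability ~1
```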

Jun 29, 2020

NASA’s New Moon-Bound Space Suits Will Get a Boost From AI

Posted in categories: information science, robotics/AI, space

Engineers are turning to generative design algorithms to build components for NASA’s next-generation space suit—the first major update in decades.

Jun 29, 2020

How Chinese tech giants are disrupting the insurance industry with pooled funds

Posted in categories: biotech/medical, finance, health, information science, internet, mobile phones

However, the situation has been improving as Chinese tech giants including e-commerce company Alibaba, search engine Baidu, on-demand delivery company Meituan Dianping, ride-hailing operator Didi Chuxing and smartphone maker Xiaomi now offer more affordable health care plans via mutual aid platforms, which operate as a collective claim-sharing mechanism.


China’s online mutual aid platforms are disrupting old-school insurance companies by leveraging big data and internet finance technologies to offer low-cost medical coverage.
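The mechanics of claim-sharing are simple to illustrate. In the toy Python sketch below (all numbers invented), members split each period's approved payouts plus a platform fee, instead of paying fixed premiums.

```python
# Toy illustration of the claim-sharing mechanism behind mutual aid
# platforms: members split each period's approved claims (plus a small
# platform fee) rather than paying fixed premiums. Numbers are invented.

members = 10_000_000                       # participants in the pool
claims = [300_000, 300_000, 100_000]       # approved payouts this period (CNY)
fee_rate = 0.08                            # hypothetical platform fee

total = sum(claims) * (1 + fee_rate)
per_member = total / members
print(f"each member pays {per_member:.4f} CNY this period")
# With tens of millions of members, even large payouts cost each
# participant a fraction of a yuan -- the "pooled funds" model.
```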

Jun 28, 2020

Mathematical Breakthrough Makes It Easier to Explore Quantum Entanglement

Posted in categories: information science, mathematics, particle physics, quantum physics

Updated mathematical techniques that can distinguish between two types of ‘non-Gaussian curve’ could make it easier for researchers to study the nature of quantum entanglement.

Quantum entanglement is perhaps one of the most intriguing phenomena known to physics. It describes how the fates of multiple particles can become entwined, even when separated by vast distances. Importantly, the probability distributions needed to define the quantum states of these particles deviate from the bell-shaped, or ‘Gaussian’, curves which underlie many natural processes. Non-Gaussian curves don’t arise in quantum systems alone, however; they can also be composed of mixtures of regular Gaussian curves, which creates difficulties for physicists studying quantum entanglement. In new research published in EPJ D, Shao-Hua Xiang and colleagues at Huaihua University in China propose a solution to this problem. They suggest an updated set of equations that allows physicists to easily check whether or not a non-Gaussian state is genuinely quantum.

As physicists make more discoveries about the nature of quantum entanglement, they are rapidly making progress towards advanced applications in the fields of quantum communication and computation. The approach taken in this study could help speed up the pace of these advances. Xiang and colleagues acknowledge that while previous efforts to distinguish between the two types of non-Gaussian curve have had some success, their reliance on Gaussian curves as a reference point has so far meant that no single approach has proven completely effective. Based on the argument that there can’t be any truly reliable Gaussian reference for a genuinely quantum non-Gaussian state, the researchers present a new theoretical framework.
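The paper's specific equations aren't reproduced in this summary, but the underlying distinction can be illustrated with a textbook fact: any mixture of Gaussian states has an everywhere non-negative Wigner function, whereas a genuinely quantum non-Gaussian state such as the single-photon Fock state dips negative. A small Python sketch:

```python
import numpy as np

# Illustration of the Gaussian-mixture vs. genuinely-quantum distinction
# (a textbook criterion, not the specific equations of Xiang et al.):
# mixtures of Gaussian states have non-negative Wigner functions, while
# the single-photon Fock state's Wigner function goes negative.

def wigner_vacuum(x, p):
    return np.exp(-(x**2 + p**2)) / np.pi           # Gaussian, >= 0 everywhere

def wigner_fock1(x, p):
    r2 = x**2 + p**2
    return (2 * r2 - 1) * np.exp(-r2) / np.pi       # negative near the origin

x = p = 0.0
mixture = 0.5 * wigner_vacuum(x, p) + 0.5 * wigner_vacuum(x - 1, p)
print(f"Gaussian mixture at origin: {mixture:+.3f}")          # positive
print(f"Fock |1> at origin:         {wigner_fock1(x, p):+.3f}")  # -1/pi
```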

Jun 27, 2020

Future shocks: 17 technology predictions for 2025

Posted in categories: biotech/medical, information science, robotics/AI

1. AI-optimized manufacturing

Paper-and-pencil tracking, luck, significant global travel and opaque supply chains are part of today’s status quo, resulting in large amounts of wasted energy, materials and time. Accelerated in part by the long-term shutdown of international and regional travel caused by COVID-19, companies that design and build products will rapidly adopt cloud-based technologies to aggregate, intelligently transform, and contextually present product and process data from manufacturing lines throughout their supply chains. By 2025, this ubiquitous stream of data and the intelligent algorithms crunching it will enable manufacturing lines to continuously optimize towards higher levels of output and product quality – reducing overall waste in manufacturing by up to 50%. As a result, we will enjoy higher-quality products, produced faster, at lower cost to our pocketbooks and the environment.

Anna-Katrina Shedletsky, CEO and Founder of Instrumental.

Jun 27, 2020

Pagaya raises $102 million to manage assets with AI

Posted in categories: finance, information science, robotics/AI, transportation

Pagaya, an AI-driven institutional asset manager that focuses on fixed income and consumer credit markets, today announced it raised $102 million in equity financing. CEO Gal Krubiner said the infusion will enable Pagaya to grow its data science team, accelerate R&D, and continue its pursuit of new asset classes including real estate, auto loans, mortgages, and corporate credit.

Pagaya applies machine intelligence to securitization — the conversion of an asset (usually a loan) into marketable securities (e.g., mortgage-backed securities) that are sold to other investors — and loan collateralization. It eschews the traditional method of securitizing pools of previously assembled asset-backed securities (ABS) for a more bespoke approach, employing algorithms to compile discretionary funds for institutional investors such as pension funds, insurance companies, and banks. Pagaya selects and buys individual loans by analyzing emerging alternative asset classes, after which it assesses their risk and draws on “millions” of signals to predict their returns.

Pagaya’s data scientists can build algorithms to track activities such as auto loans made to residents of particular cities, or even specific neighborhoods. The company is limited only by the amount of data publicly available; on average, Pagaya looks at decades of information on borrowers and evaluates thousands of variables.
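As a hypothetical illustration of this kind of pipeline (the signals, weights, and cutoff below are invented, not Pagaya's), one can score each loan's expected return from a few modeled quantities and buy only the top slice:

```python
import numpy as np

# Hypothetical sketch of algorithmic loan selection: score each loan's
# expected return from a handful of modeled signals, then buy the top
# decile. All inputs are randomly generated for illustration.

rng = np.random.default_rng(1)
n = 5_000
rate      = rng.uniform(0.05, 0.25, n)   # offered interest rate
default_p = rng.uniform(0.00, 0.20, n)   # modeled probability of default
recovery  = rng.uniform(0.30, 0.60, n)   # fraction recovered on default

# Expected one-period return: full payoff if no default, a loss of the
# unrecovered principal otherwise.
expected = (1 - default_p) * rate + default_p * (recovery - 1)

buy = expected > np.quantile(expected, 0.9)          # take the top 10%
print(f"selected {buy.sum()} loans, "
      f"mean expected return {expected[buy].mean():.2%}")
```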

Jun 21, 2020

The case for self-explainable AI

Posted in categories: biotech/medical, information science, robotics/AI

For instance, suppose a neural network has labeled the image of a skin mole as cancerous. Is it because it found malignant patterns in the mole or is it because of irrelevant elements such as image lighting, camera type, or the presence of some other artifact in the image, such as pen markings or rulers?

Researchers have developed a variety of interpretability techniques that help investigate decisions made by machine learning algorithms. But these methods are not enough to address AI’s explainability problem and create trust in deep learning models, argues Daniel Elton, a scientist who researches the applications of artificial intelligence in medical imaging.

Elton discusses why we need to shift from techniques that interpret AI decisions to AI models that can explain their decisions by themselves, as humans do. His paper, “Self-explaining AI as an alternative to interpretable AI,” recently published on the arXiv preprint server, expands on this idea.
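One of the post-hoc interpretability techniques this debate concerns is occlusion sensitivity: mask parts of the input and watch the prediction move. The Python sketch below uses a dummy "classifier" that secretly keys on a corner artifact, a stand-in for the rulers and pen markings mentioned above:

```python
import numpy as np

# Occlusion sensitivity: slide a mask over the image and record how much
# the prediction drops. The "classifier" here is a dummy that keys on a
# bright corner patch -- a stand-in for spurious artifacts like rulers
# or pen markings in dermatology images.

def model(img):
    # Pretend cancer score, secretly driven by the top-left 4x4 "artifact".
    return img[:4, :4].mean()

img = np.random.default_rng(2).random((16, 16)) * 0.2
img[:4, :4] = 1.0                          # plant the spurious artifact

base = model(img)
heatmap = np.zeros((4, 4))
for i in range(4):                         # occlude 4x4 tiles in turn
    for j in range(4):
        masked = img.copy()
        masked[4*i:4*i+4, 4*j:4*j+4] = 0.0
        heatmap[i, j] = base - model(masked)   # drop in score

print(np.round(heatmap, 2))
# Only tile (0, 0) matters: the model was "looking at" the artifact,
# not the mole -- exactly the failure mode described above.
```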

Jun 19, 2020

Scientists built a new quantum computer. It’s made of five atoms and “self-destroys” after each use

Posted in categories: computing, information science, particle physics, quantum physics

Scientists have managed another breakthrough: they built a quantum computer that can execute the notoriously difficult Shor’s algorithm. It’s just five atoms big, but the researchers claim it will be easy to scale up.
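The quantum core of Shor's algorithm finds the period r of a^x mod N; turning that period into factors is classical post-processing. A short Python sketch for the textbook case N = 15 (the period is computed classically here, standing in for the quantum step):

```python
from math import gcd

# Classical wrap-around of Shor's algorithm for N = 15. The quantum
# device's job is the period-finding step, emulated classically below.

N, a = 15, 7                 # a must be coprime to N

r, x = 1, a % N              # find the period r of a^x mod N
while x != 1:
    x = (x * a) % N
    r += 1
print(f"period of {a}^x mod {N}: r = {r}")           # r = 4

# If r is even and a^(r/2) != -1 mod N, gcd yields nontrivial factors.
half = pow(a, r // 2, N)
print(sorted({gcd(half - 1, N), gcd(half + 1, N)}))  # -> [3, 5]
```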

Jun 18, 2020

OpenAI’s New Text Generator Writes Even More Like a Human

Posted in categories: information science, robotics/AI

The data came from Common Crawl, a non-profit that scans the open web every month, downloads content from billions of HTML pages, and makes it available in a special format for large-scale data mining. In 2017, the average monthly “crawl” yielded over three billion web pages. Common Crawl has been doing this since 2011 and has petabytes of data in over 40 different languages. The OpenAI team applied some filtering techniques to improve the overall quality of the data, including adding curated datasets like Wikipedia.
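The article doesn't detail OpenAI's filters, but web-scale text pipelines typically apply simple quality heuristics and deduplication before training. A toy Python sketch of that kind of filter (illustrative rules only, not OpenAI's actual pipeline):

```python
import hashlib

# Sketch of the kind of filtering a web-scale text pipeline applies
# before training: drop short or low-quality pages, remove exact
# duplicates by content hash. Heuristics here are invented.

def keep(doc, seen):
    text = doc.strip()
    if len(text.split()) < 20:              # drop very short pages
        return False
    if text.upper() == text:                # drop all-caps boilerplate
        return False
    digest = hashlib.sha1(text.encode()).hexdigest()
    if digest in seen:                      # exact-duplicate removal
        return False
    seen.add(digest)
    return True

seen = set()
docs = ["lorem ipsum " * 20, "lorem ipsum " * 20, "SHOUTING", "short"]
print([keep(d, seen) for d in docs])        # [True, False, False, False]
```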

GPT stands for Generative Pretrained Transformer. The “transformer” part refers to a neural network architecture introduced by Google in 2017. Rather than looking at words in sequential order and making decisions based on a word’s positioning within a sentence, text or speech generators with this design model the relationships between all the words in a sentence at once. Each word gets an “attention score,” which is used as its weight and fed into the larger network. Essentially, this is a complex way of saying the model is weighing how likely it is that a given word will be preceded or followed by another word, and how much that likelihood changes based on the other words in the sentence.
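That mechanism, scaled dot-product attention, fits in a few lines of NumPy. The sketch below uses toy dimensions and random projection matrices (real transformers use many attention heads and learned weights):

```python
import numpy as np

# Minimal scaled dot-product attention -- the mechanism behind the
# "attention scores" described above. Toy dimensions, random weights.

rng = np.random.default_rng(3)
seq_len, d = 5, 8                        # 5 "words", 8-dim embeddings
X = rng.standard_normal((seq_len, d))

Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv         # queries, keys, values

scores = Q @ K.T / np.sqrt(d)            # how much each word attends to each
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax

output = weights @ V                     # weighted mix of every word's value
print(weights.round(2))                  # each row sums to 1
```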

Through finding the relationships and patterns between words in a giant dataset, the algorithm ultimately ends up learning from its own inferences, in what’s called unsupervised machine learning. And it doesn’t end with words—GPT-3 can also figure out how concepts relate to each other, and discern context.