
A new requirement if you’re a smartphone maker trying to sell in India.


Starting next year, all mobile phones sold across India must include a panic button, local news outlets are reporting. In addition, by 2018, all cell phones need to come with a built-in GPS chip, so a person in trouble can be more easily found.

“Technology is solely meant to make human life better and what better than using it for the security of women,” communications and IT minister Ravi Shankar Prasad said in a statement, according to The Economic Times. “I have taken a decision that from January 1, 2017, no cell phone can be sold without a provision for panic button and from January 1, 2018, mobile sets should have in-built GPS.”

According to the Times, those with feature phones can press keys 5 and 9 to alert local law enforcement to an emergency under the new policy. On smartphones, vendors will be required to display an “emergency” button. Smartphone makers can also build in a feature that alerts law enforcement once the on/off button is pressed three times in succession.
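As a rough illustration of how the triple-press trigger might work, here is a minimal sketch in Python. It is purely hypothetical — the names (`send_emergency_alert`, `PRESS_WINDOW_SECONDS`) are my own, not any vendor’s actual implementation:

```python
import time

# Hypothetical sketch: fire an emergency alert when the power button is
# pressed three times within a short rolling window. Not a real handset API.

PRESS_WINDOW_SECONDS = 2.0   # all three presses must land inside this window
REQUIRED_PRESSES = 3

_press_times = []

def send_emergency_alert():
    # Placeholder: a real device would contact emergency services and,
    # per the 2018 requirement, transmit the phone's GPS coordinates.
    print("Emergency alert sent to local law enforcement")

def on_power_button_press():
    """Record a press and trigger the alert on the third rapid press."""
    now = time.monotonic()
    _press_times.append(now)
    # Discard presses that have fallen outside the rolling window.
    while _press_times and now - _press_times[0] > PRESS_WINDOW_SECONDS:
        _press_times.pop(0)
    if len(_press_times) >= REQUIRED_PRESSES:
        _press_times.clear()
        send_emergency_alert()
```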

Read more

Don’t kill the messenger; I’m just sharing.


Yesterday, Trump acknowledged the power of technology in his plans for the USA.

In a major foreign policy speech yesterday, Republican presidential candidate Donald Trump said the U.S. needs to make better use of “3D printing, artificial intelligence, and cyberwarfare.”

“We need to think smarter about areas where our technological superiority – and nobody comes close – gives us an edge,” he explained. “This includes 3D printing, artificial intelligence, and cyber-warfare.”

Read more

My new Vice Motherboard article on environmentalism and why going green isn’t enough. Only radical technology can restore the world to a pristine condition—and that requires politicians not afraid of the future:


I’m worried that conservatives like Cruz will try to stop the new technologies that could change our battle against a degrading Earth.

But there are people who can save the endangered species on the planet. And they will soon dramatically change the nature of animal protection. Those people may have little to do with wildlife, but their work in genetics holds the answer to stable animal population levels in the wild. In as little as five years, we may begin stocking endangered wildlife in places where poachers have hunted animals to extinction. We’ll do this like we stock trout streams in America. Why spend resources in a losing battle to save endangered wildlife from being poached when you can spend the same amount to boost animal population levels ten-fold? Maybe even 100-fold. This type of thinking is especially important in our oceans, which we’ve bloody well fished to near death.

As a US Presidential candidate who believes that all problems can be solved by science, I believe the best way to fix all of our environmental dilemmas is via technological innovation—not attempting to reverse our carbon footprint, recycle more, or go green.

As noted earlier, the obvious reason going green doesn’t work—even though I still think it’s a good disciplinary policy for humans—is the sheer impossibility of getting the developing world to stop… well, developing. You simply cannot tell an upcoming Chinese family not to drive cars. And you can’t tell a burgeoning Indian city to only use renewable resources when it’s cheaper to use fossil fuels. You also can’t tell indigenous Brazilian parents to stop poaching when their children are hungry. These people will not listen. They want what they want, and are willing to partially destroy the planet to get it—especially when they know the developed world already possesses it.

Read more

Hmmmm…


Liberty International Underwriters (LIU), part of Liberty Mutual Insurance, has launched a cyber extortion endorsement to its Product Recall and Contamination insurance policy for food and beverage companies.

This endorsement offers coverage to food and beverage policyholders for cyber extortion monies and consultant costs up to the policy sub-limit for acts against production and day-to-day operations.

“With operations being mostly automated now and an increasing reliance on technology, the food and beverage industry faces a very real risk of having its systems hijacked by cyber criminals and held for ransom,” said LIU Senior Vice President of Global Crisis Management, Jane McCarthy. “But what many companies don’t realize is that cyber extortion is not always covered under a typical cyber policy or by a general liability policy. We developed this to address the risks associated with new technology and ‘ransomware,’ malicious software designed to block access to a computer system until a sum of money is paid.”

Read more

At least in public relations terms, transhumanism is a house divided against itself. On the one hand, there are the ingenious efforts of Zoltan Istvan – in the guise of an ongoing US presidential bid – to promote an upbeat image of the movement by focusing on human life extension and other tech-based forms of empowerment that might appeal to ordinary voters. On the other hand, there is transhumanism’s image in the ‘serious’ mainstream media, which is currently dominated by Nick Bostrom’s warnings of a superintelligence-based apocalypse. The smart machines will eat not only our jobs but us as well, if we don’t introduce enough security measures.

Of course, as a founder of contemporary transhumanism, Bostrom does not wish to stop artificial intelligence research, and he ultimately believes that we can prevent worst-case scenarios if we act now. Thus, we see a growing trade in the management of ‘existential risks’, which focusses on how we might prevent if not predict any such tech-based species-annihilating prospects. Nevertheless, this turn of events has made some observers reasonably wonder whether it might not be better simply to put a halt to artificial intelligence research altogether. As a result, the precautionary principle, previously invoked in the context of environmental and health policy, has been given a new lease on life as a generalized world-view.

The idea of ‘existential risk’ capitalizes on the prospect of a very unlikely event that, were it to come to pass, would be extremely catastrophic for the human condition. Thus, the high value of the outcome psychologically counterbalances its low probability. It’s a bit like Pascal’s wager, whereby the potentially negative consequences of not believing in God – to wit, eternal damnation – rationally compel you to believe in God, despite your instinctive doubts about the deity’s existence.
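The arithmetic behind this structure is easy to make concrete. Here is a toy expected-loss comparison in Python; the figures are purely illustrative placeholders, not anyone’s actual risk estimates:

```python
# Toy comparison of expected losses; every number here is illustrative.
p_catastrophe = 1e-6        # a very unlikely event
loss_catastrophe = 1e10     # an extremely catastrophic outcome (arbitrary units)

p_mundane = 0.1             # a common, mild harm
loss_mundane = 100

expected_catastrophe = p_catastrophe * loss_catastrophe   # = 10,000
expected_mundane = p_mundane * loss_mundane               # = 10

# The rare catastrophe dominates despite its tiny probability --
# the Pascal's-wager structure of 'existential risk' arguments.
print(expected_catastrophe, expected_mundane)
```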

However, this line of reasoning underestimates both the weakness and the strength of human intelligence. On the one hand, we’re not so powerful as to create a ‘weapon of mass destruction’, however defined, that could annihilate all of humanity; on the other, we’re not so weak as to be unable to recover from whatever errors of design or judgement might be committed in the normal advance of science and technology in the human life-world. I make this point not to counsel complacency but to question whether ‘existential risk’ is really the high concept that it is cracked up to be. I don’t believe it is.

In fact, we would do better to revisit the signature Cold War way of thinking about these matters, which the RAND Corporation strategist Herman Kahn dubbed ‘thinking the unthinkable’. What he had in mind was the aftermath of a thermonuclear war in which, say, 25–50% of the world’s population is wiped out over a relatively short period of time. How do we rebuild humanity under those circumstances? This is not so different from ‘the worst case scenarios’ proposed nowadays, even under conditions of severe global warming. Kahn’s point was that we need now to come up with the relevant new technologies that would be necessary the day after Doomsday. Moreover, such a strategy was likely to be politically more tractable than trying actively to prevent Doomsday, say, through unilateral nuclear disarmament.

And indeed, we did largely follow Kahn’s advice. And precisely because Doomsday never happened, we ended up in peacetime with the riches that we have come to associate with Silicon Valley, a major beneficiary of the US federal largesse during the Cold War. The internet was developed as a distributed communication network in case the more centralized telephone system were taken down during a nuclear attack. This sort of ‘ahead of the curve’ thinking is characteristic of military-based innovation generally. Warfare focuses minds on what’s dispensable and what’s necessary to preserve – and indeed, how to enhance that which is necessary to preserve. It is truly a context in which we can say that ‘necessity is the mother of invention’. Once again, and most importantly, we win even – and especially – if Doomsday never happens.

An interesting economic precedent for this general line of thought, which I have associated with transhumanism’s ‘proactionary principle’, is what the mid-twentieth century Harvard economic historian Alexander Gerschenkron called ‘the relative advantage of backwardness’. The basic idea is that each successive nation can industrialise more quickly by learning from its predecessors without having to follow in their footsteps. The ‘learning’ amounts to innovating more efficient means of achieving and often surpassing the predecessors’ level of development. The post-catastrophic humanity would be in a similar position to benefit from this sense of ‘backwardness’ on a global scale vis-à-vis the pre-catastrophic humanity.

Doomsday scenarios invariably invite discussions of our species’ ‘resilience’ and ‘adaptability’, but these terms are far from clear. I prefer to start with a distinction drawn in cognitive archaeology between ‘reliable’ and ‘maintainable’ artefacts. Reliable artefacts tend to be ‘overdesigned’, which is to say, they can handle all the anticipated forms of stress, but most of those never happen. Maintainable artefacts tend to be ‘underdesigned’, which means that they make it easy for the user to make replacements when disasters strike, which are assumed to be unpredictable.
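To make the contrast concrete, here is a small software analogy — a minimal sketch under my own naming, not anything drawn from the cited archaeology literature. The ‘reliable’ design absorbs anticipated stress with built-in spare capacity but has no recovery path beyond it; the ‘maintainable’ design breaks easily but is trivially repaired:

```python
# Illustrative software analogy only; Bleed's distinction concerns hunting
# weapons, not code. Class and method names are my own invention.

class ReliableTool:
    """'Overdesigned': hardened in advance against every anticipated stress."""
    def __init__(self, max_anticipated_stress: int = 10):
        # Spare capacity built in up front, most of which may never be used.
        self.capacity = max_anticipated_stress

    def use(self, stress: int) -> str:
        if stress > self.capacity:
            # An unanticipated stress exceeds the design envelope entirely.
            raise RuntimeError("unanticipated stress: no recovery path")
        return "handled"


class MaintainableTool:
    """'Underdesigned': minimal up front, but trivially repairable in the field."""
    def __init__(self):
        self.part_ok = True

    def use(self, stress: int) -> str:
        if stress > 1:
            self.part_ok = False   # the part breaks under surprise stress...
            self.replace_part()    # ...but replacement is cheap and easy
        return "handled"

    def replace_part(self) -> None:
        self.part_ok = True
```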

In a sense, ‘resilience’ and ‘adaptability’ could be identified with either position, but the Cold War’s proactionary approach to Doomsday suggests that the latter would be preferable. In other words, we want a society that is not so dependent on the likely scenarios – including the likely negative ones — that we couldn’t cope in case a very unlikely, very negative scenario comes to pass. Recalling US Defence Secretary Donald Rumsfeld’s game-theoretic formulation, we need to address the ‘unknown unknowns’, not merely the ‘known unknowns’. Good candidates for the relevant ‘unknown unknowns’ are the interaction effects of relatively independent research and societal trends, which while benign in themselves may produce malign consequences — call them ‘emergent’, if you wish.

It is now time for social scientists to present both expert and lay subjects with such emergent scenarios and ask them to pinpoint their ‘negativity’: What would be potentially lost in the various scenarios which would be vital to sustain the ‘human condition’, however defined? The answers would provide the basis for future innovation policy – namely, to recover if not strengthen these vital features in a new guise. Even if the resulting innovations prove unnecessary in the sense that the Doomsday scenarios don’t come to pass, nevertheless they will make our normal lives better – as has been the long-term effect of the Cold War.

References

Bleed, P. (1986). ‘The optimal design of hunting weapons: Maintainability or reliability?’ American Antiquity 51: 737–47.

Bostrom, N. (2014). Superintelligence. Oxford: Oxford University Press.

Fuller, S. and Lipinska, V. (2014). The Proactionary Imperative. London: Palgrave (pp. 35–36).

Gerschenkron, A. (1962). Economic Backwardness in Historical Perspective. Cambridge MA: Harvard University Press.

Kahn, H. (1960). On Thermonuclear War. Princeton: Princeton University Press.

I see articles and reports like the following about the military actually considering fully autonomous missiles, drones with missiles, etc. I have to ask myself what happened to logical thinking.


A former Pentagon official is warning that autonomous weapons would likely be uncontrollable in real-world situations thanks to design failures, hacking, and external manipulation. The answer, he says, is to always keep humans “in the loop.”

The new report, titled “Autonomous Weapons and Operational Risk,” was written by Paul Scharre, a director at the Center for a New American Security. Scharre used to work at the office of the Secretary of Defense, where he helped the US military craft its policy on the use of unmanned and autonomous weapons. Once deployed, these future weapons would be capable of selecting and engaging targets of their own choosing, raising a host of legal, ethical, and moral questions. But as Scharre points out in the new report, “They also raise critically important considerations regarding safety and risk.”

As Scharre is careful to point out, there’s a difference between semi-autonomous and fully autonomous weapons. With semi-autonomous weapons, a human controller would stay “in the loop,” monitoring the activity of the weapon or weapons system. Should it begin to fail, the controller would just hit the kill switch. But with fully autonomous weapons, the damage that could be inflicted before a human is capable of intervening is significantly greater. Scharre worries that these systems are prone to design failures, hacking, spoofing, and manipulation by the enemy.
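As a purely schematic illustration of the “in the loop” distinction — hypothetical code, with no real weapons system or API behind it — a semi-autonomous design gates every engagement on explicit human approval and keeps a kill switch available:

```python
# Hypothetical sketch of a human-in-the-loop control gate.
# All names are illustrative; no real system is being described.

class SemiAutonomousSystem:
    def __init__(self):
        self.killed = False

    def request_engagement(self, target, human_approves) -> bool:
        """Engage only if a human controller explicitly approves."""
        if self.killed:
            return False
        if not human_approves(target):
            return False   # the human stays 'in the loop' on every engagement
        return self.engage(target)

    def kill_switch(self) -> None:
        """The human controller can halt the system at any time."""
        self.killed = True

    def engage(self, target) -> bool:
        print(f"engaging {target} (human-approved)")
        return True


# Usage: the approval callback stands in for the monitoring human controller.
system = SemiAutonomousSystem()
system.request_engagement("test target", human_approves=lambda t: False)  # refused
```

A fully autonomous system, by contrast, would fold the approval step into its own targeting logic — which is precisely where Scharre locates the risk: there is no point at which a failing or compromised system pauses for a human decision.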

Read more