Source: The Web, Fossbytes, Wired
Credits to: Brian Skinner, part 1.
Abigail Beall, part 2.
Edited by: AgentX
Two Indian researchers have claimed to achieve superconductivity at room temperature, sending the world of physicists into a state of frenzy.
(If it’s true, I guess we will find out soon.)
As expected, these claims have raised several eyebrows. Also, a series of strange events involving the impersonation of a renowned physicist through email has added more to the bizarreness of this situation.
Superconductivity is the state in which a material has zero electrical resistance – meaning that electrons can flow freely through it without any hindrance. So far, this state has been achieved only by cooling materials to extremely low temperatures. (Most supercomputers at the moment run at between 21 and 30 degrees Celsius, or 70 to 86 degrees Fahrenheit; some are cooled with liquid nitrogen.)
But if superconductivity were made possible at room temperature, it would enable lossless transmission of energy and incredibly fast computers – basically, it would change the world as we know it.
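Why zero resistance matters for energy transmission follows from the Joule-heating formula P = I²R. A minimal Python sketch, with numbers invented purely for illustration:

```python
# Toy illustration of Joule heating: the power a conductor dissipates
# as heat is P = I^2 * R. With zero resistance, nothing is lost.

def power_loss(current_amps: float, resistance_ohms: float) -> float:
    """Power dissipated as heat in a conductor, in watts (P = I^2 * R)."""
    return current_amps ** 2 * resistance_ohms

# An ordinary line carrying 1000 A with 0.5 ohm of resistance
# wastes half a megawatt as heat; a superconductor wastes nothing.
print(power_loss(1000.0, 0.5))  # 500000.0
print(power_loss(1000.0, 0.0))  # 0.0
```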
Last month, Dev Thapa and Anshu Pandey, chemical physicists from the Indian Institute of Science in Bangalore, India, posted a paper on arXiv claiming they have succeeded in achieving “superconductivity at ambient temperature and pressure conditions.”
What’s more baffling is that they did it by using a matrix of gold and silver particles — materials that have never exhibited superconductivity even at incredibly cold temperatures.
So physicists from all over the world began to take a closer look at the data, as something didn’t seem right. Eventually, Brian Skinner, a physicist at MIT, found a strange correlation between two supposedly independent measurements in Thapa and Pandey’s arXiv paper.
As Skinner recounted: “Here’s where I come in. Looking through the paper one evening, I got curious as to why one of their measurements showed lots of random noise at low temperature, but very little noise at high temperature. I thought I might be able to analyze the data by digitizing the plot. But when I zoomed in closely on the figure, I saw something very surprising in the green and blue data points.”
However, noise is by definition random, so there should be no correlation between the noise measured in the two experiments. Yet the blue dots track the green dots exactly, just slightly offset.
Now, this graph could be the consequence of a serial mistake – the same dataset processed twice, with an error carried over from the previous run. But that is highly unlikely, which raises the possibility of data misrepresentation or worse: fabricated data.
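The kind of check involved here can be sketched in a few lines: genuinely independent noise in two measurements should show essentially zero correlation, while a copied-and-offset dataset correlates almost perfectly. A rough illustration with synthetic data (not the paper’s actual measurements):

```python
import random
import statistics

def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
# Two independent noisy measurements: correlation should be near zero.
a = [random.gauss(0, 1) for _ in range(5000)]
b = [random.gauss(0, 1) for _ in range(5000)]
print(abs(pearson(a, b)) < 0.1)   # True: independent noise barely correlates

# The suspicious pattern: identical "noise", just shifted by a constant.
c = [x + 0.3 for x in a]
print(pearson(a, c) > 0.99)       # True: perfectly correlated, only offset
```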
Faking data has severe repercussions, such as researchers being stripped of their degrees and papers being retracted by leading scientific journals. We hope that’s not the case here.
There is also a lot of pressure on the two researchers to share their test samples and data sets with the research community. But for now, they are keeping both under wraps while the scientific community awaits some resolution.
Whether it’s robots coming to take your job or AI being used in military drones, there is no shortage of horror stories about artificial intelligence (films like I, Robot and Ex Machina – the list goes on and on). Yet for all its potential to do harm, AI might have just as much potential to be a force for good in the world. (For countries and companies it’s a race, and second place is no place at all.)
Harnessing the power for good will require international cooperation, and a completely new approach to tackling difficult ethical questions, the authors of an editorial published in the journal Science argue.
“From diagnosing cancer and understanding climate change to delivering risky and consuming jobs, AI is already showing its potential for good,” says Mariarosaria Taddeo, deputy director of the Digital Ethics Lab at Oxford University and one of the authors of the commentary. “The question is: how can we harness this potential?”
One example of the potential is the AI from Google’s DeepMind, which made correct diagnoses 94.5 per cent of the time in a trial with Moorfields Eye Hospital, looking at 50 common eye problems.
Another is helping us understand how the brain works (then the elite can control you).
The potential for AI to do good is immense, says Taddeo. Technology using artificial intelligence will have the capability to tackle issues “from environmental disasters to financial crises, from crime, terrorism and war, to famine, poverty, ignorance, inequality, and appalling living standards,” she says.
(In my opinion it will no doubt be used as just another control system.)
AI has already been used to sift through hundreds of bird sounds to estimate when songbirds arrived at their Arctic breeding grounds. This kind of analysis will allow researchers to understand how migratory animals are responding to climate change. Another way we are learning about climate change is through images of coral. An AI trained by looking at hundreds of pictures of coral helped researchers to discover a new species this year, and the technique will be used to analyse coral’s resistance to ocean warming.
Yet AI is not without its problems. In order to ensure it can do good, we first have to understand the risks.
The potential problems that come with artificial intelligence include a lack of transparency about what goes into the algorithms; one day AI may be able to explain its own decisions, but for now it cannot. For example, an autonomous vehicle developed by researchers at the chip maker Nvidia went on the roads in 2016 without anyone knowing how it made its driving decisions.
There is also the question of who is responsible if these systems make a mistake. Take the example of an autonomous car that is about to be involved in a crash. The car could be programmed to act in the safest way for the passenger, or it could be programmed to protect the people in the other vehicle. Whether it is the manufacturer or the owner who makes that choice, who is responsible for the fate of the people involved in the crash? Earlier this year, a team of scientists designed a way to put the decision in the hands of the human passenger. The ‘ethical knob’ would switch a car’s setting from “full altruist” to “full egoist”, with the middle setting being impartial.
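One way such a knob could enter a car’s decision logic is as a weight blending the risk to the passenger against the risk to others. This is only a hypothetical sketch – the function and parameter names are my own, not the researchers’ actual design:

```python
def ethical_knob_cost(knob: float,
                      risk_to_passenger: float,
                      risk_to_others: float) -> float:
    """Blend two risks using a knob in [-1, 1]:
    -1 = 'full egoist'   (only the passenger's risk counts),
    +1 = 'full altruist' (only other people's risk counts),
     0 = impartial       (equal weight to both).
    Hypothetical weighting scheme for illustration only."""
    if not -1.0 <= knob <= 1.0:
        raise ValueError("knob must be in [-1, 1]")
    w_others = (1.0 + knob) / 2.0    # 0 at full egoist, 1 at full altruist
    w_passenger = 1.0 - w_others
    return w_passenger * risk_to_passenger + w_others * risk_to_others

# The impartial middle setting weighs both parties equally:
print(ethical_knob_cost(0.0, 1.0, 0.0))   # 0.5
# 'Full egoist' counts only the passenger's risk:
print(ethical_knob_cost(-1.0, 1.0, 0.0))  # 1.0
```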
Another issue is the potential for AI to unfairly discriminate. One example of this, says Taddeo, was Compas, a risk-assessment tool developed by a privately held company and used by the Wisconsin Department of Corrections. According to Taddeo, the system was used to decide whether to grant people parole and ended up discriminating against African-American and Hispanic men. When a team of journalists studied 10,000 criminal defendants in Broward County, Florida, it turned out the system predicted that black defendants posed a higher risk of recidivism than they actually did, while predicting the opposite for white defendants.
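The disparity the journalists found is, in essence, a gap in false-positive rates between groups: non-reoffenders in one group were flagged as high risk far more often than in the other. A toy sketch with invented numbers (not the real Broward County figures):

```python
def false_positive_rate(predicted_high_risk, reoffended):
    """Share of people who did NOT reoffend but were labelled high risk."""
    non_reoffenders = [p for p, r in zip(predicted_high_risk, reoffended)
                       if not r]
    if not non_reoffenders:
        return 0.0
    return sum(non_reoffenders) / len(non_reoffenders)

# Invented toy labels: 1 = flagged high risk / did reoffend, 0 otherwise.
group_a_pred = [1, 1, 1, 0, 1, 0]
group_a_true = [0, 0, 1, 0, 1, 0]
group_b_pred = [0, 0, 1, 0, 1, 0]
group_b_true = [0, 0, 1, 0, 1, 0]

# Same actual behaviour, very different error rates between groups:
print(false_positive_rate(group_a_pred, group_a_true))  # 0.5
print(false_positive_rate(group_b_pred, group_b_true))  # 0.0
```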
Meanwhile, there is the issue of big data collection. AI is being used to track whole cities in China, drawing on data collected from various sources. For AI to progress, the amount of data needed for it to be successful is only going to increase. This means there will be increasing chances for people’s data to be collected, stored and manipulated without their consent, or even their knowledge.
But Taddeo says national and supranational laws and regulations, such as GDPR, will be crucial to establish boundaries and enforce principles. Yet ultimately, AI is going to be created globally and used around the world, potentially also in space, for example when hunting for exoplanets. So the ways we regulate it cannot be specific to boundaries on Earth.
There should be no universal regulator of artificial intelligence, she says. “AI will be implemented across a wide range of fields, from infrastructure-building and national defence to education, sport, and entertainment,” she says. So a one-size-fits-all approach would not work. “We need to consider culturally-dependent and domain-dependent differences.” For example, in one culture it may be deemed acceptable to take a photograph of a person, but another culture may not allow photographs to be taken for religious reasons.
There are a few initiatives already working on understanding AI technology and its foreseeable impact. These include AI4People, the first global forum in Europe on the social impact of AI, the EU’s strategy for AI and the EU Declaration on Cooperation on Artificial Intelligence. The EU declaration was signed earlier this year, and those involved pledged to work together on both AI ethics and using AI for good purposes, including modernising Europe’s education and training systems.
Other initiatives include the Partnership on Artificial Intelligence to Benefit People and Society, which both of the Science editorial’s authors are members of. “We designed the Partnership on AI, in part, so that we can invest more attention and effort on harnessing AI to contribute to solutions for some of humanity’s most challenging problems, including making advances in health and wellbeing, transportation, education, and the sciences,” say Eric Horvitz and Mustafa Suleyman, the Partnership on AI’s founding co-chairs.
These are in their early stages, but more initiatives like this need to be created so an informed debate can be had, says Taddeo. The most important thing is that we keep talking about it. “The debate on the governance of AI needs to involve scientists, academics, engineers, lawyers, policy-makers, politicians, civil society and business representatives,” says Taddeo. “We need to understand the nature of post-AI societies and the values that should underpin the design, regulation, and use of AI in these societies.”
After all, we are only humans. So the risk remains that we may misuse or underuse AI.
“In this respect, AI is not different from electricity or steam engines,” says Taddeo. “It is our responsibility to steer the use of AI in such a way to foster human flourishing and well-being and mitigate the risks that this technology brings about.”
Here are the top 25 companies to watch in AI development.
1. AIBrain
2. Amazon
3. Anki
4. Apple
5. Banjo
6. CloudMinds
7. Deepmind
8. Facebook
9. Google
10. H2O
11. IBM
12. iCarbonX
13. Intel
14. Iris AI
15. Microsoft
16. Next IT
17. Nvidia
18. OpenAI
19. Salesforce
20. SoundHound
21. Twilio
22. Twitter
23. ViSenze
24. X.ai
25. Zebra Medical Vision
What do you think AI and supercomputers will bring us?
Maybe the answers to the universe – but at what cost to society?
As always, make your own mind up.
Always question what you see and hear.