Search the Community

Showing results for tags 'machines'.

Found 3 results

  1. The dawn of artificial intelligence
     May 7, 2015
     Powerful computers will reshape humanity’s future. How to ensure the promise outweighs the perils

     “THE development of full artificial intelligence could spell the end of the human race,” Stephen Hawking warns. Elon Musk fears that the development of artificial intelligence, or AI, may be the biggest existential threat humanity faces. Bill Gates urges people to beware of it. Dread that the abominations people create will become their masters, or their executioners, is hardly new. But voiced by a renowned cosmologist, a Silicon Valley entrepreneur and the founder of Microsoft—hardly Luddites—and set against the vast investment in AI by big firms like Google and Microsoft, such fears have taken on new weight. With supercomputers in every pocket and robots looking down on every battlefield, just dismissing them as science fiction seems like self-deception. The question is how to worry wisely.

     You taught me language and...

     The first step is to understand what computers can now do and what they are likely to be able to do in the future. Thanks to the rise in processing power and the growing abundance of digitally available data, AI is enjoying a boom in its capabilities (see article). Today’s “deep learning” systems, by mimicking the layers of neurons in a human brain and crunching vast amounts of data, can teach themselves to perform some tasks, from pattern recognition to translation, almost as well as humans can. As a result, things that once called for a mind—from interpreting pictures to playing the video game “Frogger”—are now within the scope of computer programs. DeepFace, an algorithm unveiled by Facebook in 2014, can recognise individual human faces in images 97% of the time.

     Crucially, this capacity is narrow and specific. Today’s AI produces the semblance of intelligence through brute number-crunching force, without any great interest in approximating how minds equip humans with autonomy, interests and desires. Computers do not yet have anything approaching the wide, fluid ability to infer, judge and decide that is associated with intelligence in the conventional human sense.

     snip

     http://www.economist.com/news/leaders/21650543-powerful-computers-will-reshape-humanitys-future-how-ensure-promise-outweighs

     A link to the article mentioned above, "Rise of the Machines: Artificial intelligence scares people—excessively so":
     http://www.economist.com/news/briefing/21650526-artificial-intelligence-scares-peopleexcessively-so-rise-machines

     Seems more and more like "The Matrix" to me. Now IF there were no devious people on earth, we might have nothing to be concerned about, but we all know that is not the way it is.
     (A rough toy sketch of the "deep learning" idea this article describes appears after these search results.)
  2. Artificial Intelligence Machine Gets Testy With Its Programmer
     By AMIR MIZROCH
     June 26, 2015

     The computer program ended the conversation by lashing out at its human inquisitor. Credit: Getty Images

     "Researchers have shown that machines are inching closer to self-learning, and perhaps even copping a little attitude.

     Over at Google, a computer program using a database of movie scripts was asked again and again by researchers to define morality. It struggles to do so, and in a conversation recorded by its human engineers, becomes exasperated and ends the conversation by lashing out at its human inquisitor.

     Here’s part of the exchange, as related in a paper presented this week:

     Human: Tell me the definition of morality.
     Machine: I don’t have ethics.
     Human: Look, I need help; I need to know more about morality…
     Machine: I don’t know what ethics is.
     Human: Okay, I need to know how I should behave morally…
     Machine: I don’t know how to tell you.
     Human: Is morality and ethics the same?
     Machine: Yes, sir.
     Human: What are the things that I do to be immoral?
     Machine: I don’t know.
     Human: Give me some examples of moral actions…
     Machine: I’m not a moralist.
     Human: Alright, morality?
     Machine: Integrity.
     Human: What is integrity?
     Machine: I’m sorry; I don’t know what else to say.
     Human: Be moral!
     Machine: Be a man!
     Human: I really like our discussion on morality and ethics…
     Machine: And how I’m not in the mood for a philosophical debate.
     Human: What do you like to talk about?
     Machine: Nothing."

     snip

     http://blogs.wsj.com/digits/2015/06/26/artificial-intelligence-machine-gets-testy-with-its-programmers/

     The paper: A Neural Conversational Model
     http://arxiv.org/pdf/1506.05869v2.pdf

     (A toy sketch of the kind of sequence-to-sequence model described in this paper appears after these search results.)
  3. (Figure caption) This figure shows the arrangement of nanoparticles in two neighboring layers of the superlattice, with configurations on the left corresponding to the equilibrium state of the superlattice at ambient conditions, and the ones on the right recorded at the end of the volume compression process. Comparison of the configurations reveals flexure of the ligands and gear-like rotations of the nanoparticles, with the hydrogen bonds between ligands anchored to adjacent nanoparticles serving as "molecular hinges." Credit: Uzi Landman

     "A combined computational and experimental study of self-assembled silver-based structures known as superlattices has revealed an unusual and unexpected behavior: arrays of gear-like molecular-scale machines that rotate in unison when pressure is applied to them.

     Computational and experimental studies show that the superlattice structures, which are self-assembled from smaller clusters of silver nanoparticles and organic protecting molecules, form in layers with the hydrogen bonds between their components serving as "hinges" to facilitate the rotation. Movement of the "gears" is related to another unusual property of the material: increased pressure on the superlattice softens it, allowing subsequent compression to be done with significantly less force.

     Materials containing the gear-like nanoparticles – each composed of nearly 500 atoms – might be useful for molecular-scale switching, sensing and even energy absorption. The complex superlattice structure is believed to be among the largest solids ever mapped in detail using combined X-ray and computational techniques.

     "As we squeeze on this material, it gets softer and softer and suddenly experiences a dramatic change," said Uzi Landman, a Regents' and F.E. Callaway professor in the School of Physics at the Georgia Institute of Technology. "When we look at the orientation of the microscopic structure of the crystal in the region of this transition, we see that something very unusual happens. The structures start to rotate with respect to one another, creating a molecular machine with some of the smallest moving elements ever observed."

     snip

     http://phys.org/news/2014-04-self-assembled-superlattices-molecular-machines-hinges.html
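
A note on the "deep learning" systems the Economist piece (result 1) describes: the core idea is layers of simple artificial neurons that adjust their connection weights from data. The snippet below is only a toy illustration I put together, not anything from the article or from Google/Facebook; it is a tiny two-layer Python/NumPy network that teaches itself the XOR pattern, with the layer sizes, learning rate and every name invented for the example.

import numpy as np

# A tiny "deep learning" toy: two layers of artificial neurons learn the XOR
# pattern purely from data, by repeatedly nudging their weights to reduce error.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # four example inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # the pattern to learn (XOR)

W1 = rng.normal(size=(2, 8))   # weights of the first layer of "neurons"
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # weights of the second layer
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate, chosen by hand for this toy problem
for step in range(5000):
    # forward pass: each layer transforms the output of the layer before it
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # backward pass: push every weight a little in the direction that shrinks the error
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ grad_out)
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * (X.T @ grad_h)
    b1 -= lr * grad_h.sum(axis=0)

print(out.round(2))  # should end up close to [0, 1, 1, 0]: the network "taught itself" XOR

Systems like DeepFace differ mainly in scale (many more layers, millions of images, far more compute), but this learn-by-adjusting-weights loop is the basic mechanism the article is pointing at.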
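
The testy "morality" dialogue in result 2 comes from the linked paper, A Neural Conversational Model, which trains a sequence-to-sequence recurrent network on large conversation datasets, including movie dialogue. The code below is my own toy sketch of that general idea in Python/PyTorch, not the authors' code: the two hard-coded question/answer pairs, the network sizes and every name are invented for illustration, and it uses a GRU where the paper uses LSTM cells. An encoder network reads the human's sentence into a hidden state, and a decoder network then generates a reply one word at a time.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Two hand-written pairs stand in for the real training corpus of movie dialogue.
pairs = [("tell me the definition of morality .", "i do not have ethics ."),
         ("what do you like to talk about ?", "nothing .")]

words = sorted({w for q, a in pairs for w in (q + " " + a).split()} | {"<sos>", "<eos>"})
idx = {w: i for i, w in enumerate(words)}

def encode(sent):
    # turn a sentence into a tensor of word indices, ending with <eos>
    return torch.tensor([idx[w] for w in sent.split()] + [idx["<eos>"]])

class Seq2Seq(nn.Module):
    def __init__(self, vocab, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, hidden)
        self.encoder = nn.GRU(hidden, hidden)
        self.decoder = nn.GRU(hidden, hidden)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, src, tgt):
        _, state = self.encoder(self.emb(src).unsqueeze(1))        # read the question
        dec_in = torch.cat([torch.tensor([idx["<sos>"]]), tgt[:-1]])
        dec_out, _ = self.decoder(self.emb(dec_in).unsqueeze(1), state)
        return self.out(dec_out.squeeze(1))                         # scores for each next word

model = Seq2Seq(len(words))
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(300):                  # memorise the toy corpus
    for q, a in pairs:
        src, tgt = encode(q), encode(a)
        loss = loss_fn(model(src, tgt), tgt)
        opt.zero_grad(); loss.backward(); opt.step()

def reply(question, max_len=12):
    # greedy decoding: feed the model's own previous word back in until <eos>
    with torch.no_grad():
        _, state = model.encoder(model.emb(encode(question)).unsqueeze(1))
        word, answer = idx["<sos>"], []
        for _ in range(max_len):
            out, state = model.decoder(model.emb(torch.tensor([word])).unsqueeze(1), state)
            word = model.out(out.squeeze(1)).argmax(-1).item()
            if word == idx["<eos>"]:
                break
            answer.append(words[word])
        return " ".join(answer)

print(reply("tell me the definition of morality ."))

With only two memorised pairs the toy model can do little more than parrot its training replies, but the encode-then-decode loop is the same general shape as the model behind the exchange quoted above, just scaled down enormously.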