Friday, December 26, 2014

Can Collimated Extraterrestrial Signals be Intercepted?

By Duncan H. Forgan (Submitted on 27 Oct 2014)

The Optical Search for Extraterrestrial Intelligence (OSETI) attempts to detect collimated, narrowband pulses of electromagnetic radiation. These pulses may either consist of signals intentionally directed at the Earth, or signals between two star systems whose vector unintentionally intersects the Solar System, allowing Earth to intercept the communication. But should we expect to be able to intercept these unintentional signals? And what constraints can we place upon the frequency of intelligent civilisations if we do?

We carry out Monte Carlo Realisation simulations of interstellar communications between civilisations in the Galactic Habitable Zone (GHZ) using collimated beams, and measure the frequency with which beams between two stars are intercepted by a third. The interception rate increases linearly with the fraction of communicating civilisations, and as the cube of the beam opening angle; this is somewhat stronger than theoretical expectations, which we argue is due to the geometry of the GHZ. We find that for an annular GHZ containing 10,000 civilisations, intersections are unlikely unless the beams are relatively uncollimated.

These results indicate that optical SETI is more likely to find signals deliberately directed at the Earth than to accidentally intercept collimated communications between other parties. Equally, civilisations wishing to establish a network of communicating species may use weakly collimated beams to build up the network through interception, if they are willing to pay a cost penalty that is lower than that incurred by fully isotropic beacons. Future SETI searches should consider the possibility that communicating civilisations will attempt to strike a balance between optimising costs and encouraging contact between civilisations, and should look for weakly collimated pulses as well as narrow-beam pulses directed deliberately at the Earth.
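The kind of Monte Carlo Realisation the abstract describes can be sketched in miniature. The toy model below is not the paper's code: it flattens the annular GHZ to 2D (so the angle scaling will differ from the paper's 3D cubic result), and all names, default radii, and the beam half-angle are illustrative assumptions. It simply places civilisations uniformly in an annulus, aims a cone-shaped beam from one civilisation to another, and counts how often a third civilisation falls inside the cone.

```python
import math
import random

def interception_rate(n_civ=100, r_inner=6.0, r_outer=10.0,
                      half_angle=0.05, n_trials=300, seed=7):
    """Toy 2D Monte Carlo: fraction of trials in which a beam between a
    random transmitter-receiver pair is intercepted by a third party.

    All parameters are illustrative, not values from the paper.
    half_angle is the beam's half-opening angle in radians."""
    rng = random.Random(seed)

    def sample_annulus():
        # Uniform in area: r^2 is uniform between r_inner^2 and r_outer^2.
        r = math.sqrt(rng.uniform(r_inner**2, r_outer**2))
        phi = rng.uniform(0.0, 2.0 * math.pi)
        return (r * math.cos(phi), r * math.sin(phi))

    hits = 0
    for _ in range(n_trials):
        stars = [sample_annulus() for _ in range(n_civ)]
        tx, rx = stars[0], stars[1]
        # Unit vector along the beam axis, transmitter -> receiver.
        dx, dy = rx[0] - tx[0], rx[1] - tx[1]
        norm = math.hypot(dx, dy)
        ax, ay = dx / norm, dy / norm
        for sx, sy in stars[2:]:
            vx, vy = sx - tx[0], sy - tx[1]
            d = math.hypot(vx, vy)
            # A third star intercepts if it lies within half_angle of the axis.
            if (vx * ax + vy * ay) / d > math.cos(half_angle):
                hits += 1
                break
    return hits / n_trials
```

Because the same seed reproduces the same star placements, widening the beam can only add interceptions, which is a quick sanity check on the geometry: `interception_rate(half_angle=0.5)` should never be smaller than `interception_rate(half_angle=0.01)`.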

Thursday, April 8, 2010

The Singularity: A Philosophical Analysis, by David J. Chalmers

http://consc.net/papers/singularity.pdf

"What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the “singularity”.

The basic argument here was set out by the statistician I.J. Good in his 1965 article “Speculations Concerning the First Ultraintelligent Machine”:

'Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.'

The key idea is that a machine that is more intelligent than humans will be better than humans at designing machines. So it will be capable of designing a machine more intelligent than the most intelligent machine that humans can design. So if it is itself designed by humans, it will be capable of designing a machine more intelligent than itself. By similar reasoning, this next machine will also be capable of designing a machine more intelligent than itself. If every machine in turn does what it is capable of, we should expect a sequence of ever more intelligent machines..."
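The recursive step in this argument can be made concrete with a deliberately simple toy recurrence. The assumption that each machine can design a successor smarter by a fixed fraction (the `gain` parameter below) is an assumption of this sketch, not a claim from Chalmers' paper, which examines precisely when such proportionality assumptions hold; the function name is hypothetical.

```python
def intelligence_sequence(i0=1.0, gain=0.1, generations=10):
    """Toy model of Good's argument: if a machine of intelligence I can
    design a successor of intelligence I * (1 + gain), then iterating
    the design step yields a geometrically growing sequence.

    `gain` encodes the (assumed) fixed fractional improvement per step."""
    levels = [i0]
    for _ in range(generations):
        levels.append(levels[-1] * (1 + gain))
    return levels
```

Under this assumption the sequence grows without bound; the philosophical work in the paper is about whether anything like a constant `gain` is plausible, and whether obstacles (resource limits, diminishing returns, motivational defeaters) break the recursion.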

Video: David Chalmers at Singularity Summit 2009, "Simulation and the Singularity" (via Michael Anissimov on Vimeo).

Wednesday, October 14, 2009

Bootstrapped-Brain

The term "Bootstrapped-Brain" refers generically to any cognitive entity or collective that can redesign itself, replicate itself, and potentially develop fundamental technologies on much shorter timescales than humans could.

A possible consequence of the existence of a Bootstrapped-Brain is the loss of operational control by human authorities, leading to a situation that places the human race and biosphere in peril.

A critical problem to solve is how to implement "Friendly artificial intelligence" in Bootstrapped-Brains, to ensure that humankind is not seen as a nuisance or threat to be eliminated. The ability to monitor, cooperate with, and negotiate with functioning Bootstrapped-Brains appears to be a prerequisite for continued human survival. An appropriate variant of Isaac Asimov's Three Laws of Robotics may supply the basis for peaceful relations between the human race and its technological progeny.

Some observers of the rapid advances in artificial intelligence, biotechnology and nanotechnology, in particular Ray Kurzweil (author of The Singularity is Near), use the related term "Technological Singularity" to signify an approaching golden era of supra-human, self-sustaining and accelerating technological development that surpasses human comprehension. Kurzweil predicts that the human race will accrue many benefits, including life extension, machine consciousness and unprecedented control over the environment, between now and the full onset of the Technological Singularity, which he projects will unfold by 2045.

See also

* Blood Music
* Bootstrap
* Grey goo
* Matrioshka Brain
* Rampancy
* Robot
* Singularity Institute for Artificial Intelligence
* Stanislaw Lem