32,000-year-old plant brought back to life

The oldest plant ever to be regenerated has been grown from 32,000-year-old seeds, beating the previous record holder by some 30,000 years. (Related: “‘Methuselah’ Tree Grew From 2,000-Year-Old Seed.”)

A Russian team discovered a seed cache of Silene stenophylla, a flowering plant native to Siberia, that had been buried by an Ice Age squirrel near the banks of the Kolyma River. Radiocarbon dating confirmed that the seeds were 32,000 years old.

The mature and immature seeds, which had been entirely encased in ice, were unearthed from 124 feet (38 meters) below the permafrost, surrounded by layers that included mammoth, bison, and woolly rhinoceros bones.

The mature seeds had been damaged—perhaps by the squirrel itself, to prevent them from germinating in the burrow. But some of the immature seeds retained viable plant material.

The team extracted that tissue from the frozen seeds, placed it in vials, and successfully germinated the plants, according to a new study. The plants—identical to each other but with different flower shapes from modern S. stenophylla—grew, flowered, and, after a year, created seeds of their own.

Turing Test in Artificial Intelligence

The Turing test was developed by the computer scientist Alan Turing in 1950. He proposed it as a way to determine whether or not a computer (machine) can think intelligently like a human. 

Imagine a game with three players: two humans and one computer. An interrogator (a human) is isolated from the other two players. The interrogator’s job is to figure out which one is the human and which one is the computer by asking both of them questions. To make things harder, the computer tries to make the interrogator guess wrongly; in other words, the computer tries to be as indistinguishable from a human as possible. 
In the “standard interpretation” of the Turing test, player C, the interrogator, is given the task of determining which player, A or B, is a computer and which is a human. The interrogator is limited to using the responses to written questions to make the determination.

The conversation between interrogator and computer would be like this: 
C(Interrogator): Are you a computer? 
A(Computer): No 

C: Multiply one large number by another: 158745887 * 56755647 
A: (after a long pause) an incorrect answer! 

C: Add 5478012 and 4563145 
A: (pauses for about 20 seconds, then answers) 10041157 

If the interrogator cannot reliably distinguish the computer’s answers from the human’s, the computer passes the test and the machine is considered as intelligent as a human. In other words, a computer is considered intelligent if its conversation cannot easily be distinguished from a human’s. The whole conversation is limited to a text-only channel, such as a computer keyboard and screen. 
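The setup can be sketched as a blind trial. The minimal Python toy below is an illustration only: the canned players and the random judge are hypothetical stand-ins, not real chatbots. It shows the pass criterion in action: a judge who cannot tell the transcripts apart does no better than chance at naming the machine.

```python
import random

def human_player(question):
    # Hypothetical stand-in for a human respondent.
    return {"Are you a computer?": "No"}.get(question, "I'm not sure.")

def machine_player(question):
    # A machine imitating the human: here it gives identical answers.
    return {"Are you a computer?": "No"}.get(question, "I'm not sure.")

def imitation_game(judge, human, machine, questions):
    """One round: the judge sees only labelled text transcripts and
    must name the label hiding the machine."""
    assignment = {"A": human, "B": machine}
    if random.random() < 0.5:  # shuffle labels so ordering reveals nothing
        assignment = {"A": machine, "B": human}
    transcripts = {label: [(q, player(q)) for q in questions]
                   for label, player in assignment.items()}
    guess = judge(transcripts)  # judge returns "A" or "B"
    truth = "A" if assignment["A"] is machine else "B"
    return guess == truth  # True means the judge caught the machine

# Against a perfect imitator the judge can only guess at random,
# so it identifies the machine only about half the time.
random_judge = lambda transcripts: random.choice(["A", "B"])
wins = sum(imitation_game(random_judge, human_player, machine_player,
                          ["Are you a computer?"]) for _ in range(10_000))
print(wins / 10_000)  # close to 0.5: the machine "passes"
```

A judge with a reliably better-than-chance success rate would mean the machine fails the test.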

He also proposed that by the year 2000 a computer “would be able to play the imitation game so well that an average interrogator will not have more than a 70-percent chance of making the right identification (machine or human) after five minutes of questioning.” No computer has come close to this standard. 

In 1980, the philosopher John Searle proposed the “Chinese room argument”. He argued that the Turing test cannot determine whether a machine is genuinely intelligent like a human: programs such as ELIZA and PARRY might pass the test simply by manipulating symbols of which they have no understanding, and without understanding they cannot be described as “thinking” in the same sense people do. We will discuss this more in the next article. 
 

In 1990, the New York businessman Hugh Loebner announced a $100,000 prize for the first computer program to pass the test. However, no AI program has so far come close to passing an undiluted Turing test.

Artificial intelligence can be categorized by task scope and competence into the following two types: 

  1. Weak artificial intelligence: AI designed for narrow tasks such as personal assistants, customer relationship management, video games, and questionnaires. It consists of a comparatively small algorithm and a data source tied to the service it supports. Examples of weak AI include Amazon Alexa, Indian Railways’ Disha, and Apple’s Siri.
  2. Strong artificial intelligence: a system that carries out tasks normally performed directly by humans, such as driving a vehicle. These tasks are more complex, and the systems must be programmed to handle situations where conditions change or are unpredictable. Testing such systems is very difficult, but they are very useful to human beings. This category of AI aims to replace manual human tasks with programmed machines, most visibly intelligent robots, which some argue should be granted rights similar to humans.

Turing Test: 

Alan Turing proposed a simple method of determining whether a machine can demonstrate human intelligence: if a machine can carry on a conversation with a human without being detected as a machine, it has demonstrated human-like intelligence. 

The test judges the machine’s conversational skill against a human’s. Under this test, a computer program must produce appropriate responses for a human interlocutor; in practice, such programs match the conversational input against existing data through an algorithm and send a response back to the human.

Quantum Entanglement

Quantum entanglement is a physical phenomenon in which the quantum states of two or more objects have to be described with reference to each other, even though the individual objects may be spatially separated.

This leads to correlations between observable physical properties of the systems.

For example, it is possible to prepare two particles in a single quantum state such that when one is observed to be spin-up, the other will always be observed to be spin-down, and vice versa. This is despite the fact that, according to quantum mechanics, it is impossible to predict which set of outcomes will be observed.
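The same-axis behaviour described above can be illustrated with a short classical simulation. This is a sketch only: it reproduces the perfect anti-correlation when both sides measure along the same axis, but no classical model of this kind can reproduce the full quantum correlations seen when the two sides measure along different axes.

```python
import random

def measure_singlet_pair():
    """Sample one same-axis measurement of a spin-singlet pair:
    each individual outcome is random, but the pair is always opposite."""
    a = random.choice(["up", "down"])   # particle A: 50/50
    b = "down" if a == "up" else "up"   # particle B: always the opposite
    return a, b

pairs = [measure_singlet_pair() for _ in range(10_000)]
# Every pair is perfectly anti-correlated...
assert all(a != b for a, b in pairs)
# ...yet each particle's own outcomes look like fair coin flips.
ups = sum(a == "up" for a, _ in pairs)
print(ups / len(pairs))  # close to 0.5
```

Because each side sees only random coin flips, no usable information is carried by the correlation itself, consistent with the no-signalling point below.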

As a result, measurements performed on one system seem to be instantaneously influencing other systems entangled with it.

But quantum entanglement does not enable the transmission of classical information faster than the speed of light.

Quantum entanglement has applications in the emerging technologies of quantum computing and quantum cryptography, and has been used to realize quantum teleportation experimentally.

At the same time, it prompts some of the more philosophically oriented discussions concerning quantum theory.

The correlations predicted by quantum mechanics, and observed in experiment, are incompatible with the principle of local realism, which holds that information about the state of a system should only be mediated by interactions in its immediate surroundings.

Different views of what is actually occurring in the process of quantum entanglement can be related to different interpretations of quantum mechanics.

Warp drive: A new hope

In 1994, physicist Miguel Alcubierre proposed a radical technology that would allow faster-than-light travel: the warp drive, a hypothetical way to skirt the universe’s ultimate speed limit by bending the fabric of reality.

It was an intriguing idea – even NASA has been researching it at the Eagleworks laboratory – but Alcubierre’s proposal contained problems that seemed insurmountable. Now, a recent paper by US-based physicists Alexey Bobrick and Gianni Martire has resolved many of those issues and generated a lot of buzz.

But while Bobrick and Martire have managed to substantially demystify warp technology, their work actually suggests that faster-than-light travel will remain out of reach for beings like us, at least for the time being.

There is, however, a silver lining: warp technology may have radical applications beyond space travel.

Across the universe?

The story of warp drives starts with Einstein’s crowning achievement: general relativity. The equations of general relativity capture the way in which spacetime – the very fabric of reality – bends in response to the presence of matter and energy which, in turn, explains how matter and energy move.

General relativity places two constraints on interstellar travel. First, nothing can be accelerated past the speed of light (around 300,000 km per second). Even travelling at this dizzying speed it would still take us four years to arrive at Proxima Centauri, the nearest star to our Sun.

Second, the clock on a spaceship travelling close to the speed of light slows down relative to a clock on Earth (this is known as time dilation). Assuming a constant state of acceleration, this makes it possible to travel to the stars: one could reach a distant star 150 light-years away within one’s lifetime. The catch, however, is that upon one’s return more than 300 years would have passed on Earth.
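These numbers can be checked with the standard relativistic-rocket formulas. The sketch below assumes a constant proper acceleration of 1 g, accelerating to the midpoint of the journey and decelerating to the target; the 150 light-year distance is the illustrative figure from the paragraph above.

```python
import math

# 1 g expressed in light-years per year squared (about 1.03),
# working in units where c = 1.
G = 9.80665 * (3.15576e7) ** 2 / 9.4607e15

def trip_times(distance_ly, a=G):
    """One-way (ship time, Earth time) in years for a trip under constant
    proper acceleration a: speed up to the midpoint, brake to the target."""
    half = distance_ly / 2
    ship = 2 * (1 / a) * math.acosh(a * half + 1)             # proper time
    earth = 2 * (1 / a) * math.sqrt((a * half + 1) ** 2 - 1)  # Earth time
    return ship, earth

ship_one_way, earth_one_way = trip_times(150)
print(f"ship:  {2 * ship_one_way:.0f} yr round trip")   # about 20 years
print(f"Earth: {2 * earth_one_way:.0f} yr round trip")  # over 300 years
```

So the traveller ages only about two decades over the round trip, while more than three centuries pass on Earth, as the paragraph states.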

A new hope

This is where Alcubierre came in. He argued that the mathematics of general relativity allowed for “warp bubbles” – regions where matter and energy were arranged in such a way as to bend spacetime in front of the bubble and expand it to the rear in a way that allowed a “flat” area inside the bubble to travel faster than light.

To get a sense of what “flat” means in this context, note that spacetime is sort of like a rubber mat. The mat curves in the presence of matter and energy (think of putting a bowling ball on the mat). Gravity is nothing more than the tendency objects have to roll into the dents created by things like stars and planets. A flat region is like a part of the mat with nothing on it.

Such a drive would also avoid the uncomfortable consequences of time dilation. One could potentially make a round trip into deep space and still be greeted by one’s nearest and dearest at home.

Type III civilizations

Theoretical physicist Freeman Dyson proposed in the 1960s that advanced civilizations harvesting the energy of their host stars could be detected by the telltale evidence of their mid-infrared (IR) emissions.

Earlier this year, Roger Griffith of Penn State University and co-authors compiled a catalogue of 93 candidate galaxies — culled from a total population of 100,000 objects — where unusually extreme mid-IR emission is observed. One problem is that although rare, this kind of emission can also be generated by natural astrophysical processes related to thermal emission from warm dust.

Prof Garrett has used radio measurements of the very best candidate galaxies and found that the vast majority of these systems have emission that is best explained by natural astrophysical processes.

“The original research at Penn State has already told us that such systems are very rare but the new analysis suggests that this is probably an understatement, and that advanced Kardashev Type III civilizations basically don’t exist in the local Universe,” said Prof Garrett, author of a paper published in the journal Astronomy & Astrophysics (arXiv.org preprint).

“In my view, it means we can all sleep safely in our beds tonight – an alien invasion doesn’t seem at all likely.”

“In particular, the galaxies in the sample follow a well-known global relation that holds for almost all galaxies – the so-called ‘mid-IR radio correlation’.”

“The presence of radio emission at the levels expected from the correlation suggests that the mid-IR emission is not heat from alien factories but more likely emission from dust – for example, dust generated and heated by regions of massive star formation.”

According to Prof Garrett, his method could also be used to help identify less advanced, Kardashev Type II civilizations.

“It’s a bit worrying that Type III civilizations don’t seem to exist. It’s not what we would predict from the physical laws that explain so well the rest of the physical Universe,” Prof Garrett said.

“We’re missing an important part of the jigsaw puzzle here. Perhaps advanced civilizations are so energy efficient that they produce very low waste heat emission products – our current understanding of physics makes that a difficult thing to do.”

“What’s important is to keep on searching for the signatures of extraterrestrial intelligence until we fully understand just what is going on.”

Black hole

How small can a black hole be? For several decades, astronomers have worked to answer this question by tallying the black holes in our corner of the universe.

They’ve found plenty of big and medium-size ones over the years—including a supermassive monster at the heart of our galaxy. But until recently, they’ve seen no signs of small ones, and that’s presented a long-standing mystery in astrophysics.

Now, astronomers have discovered a black hole with just three times the mass of the sun, making it one of the smallest found to date—and it happens to be the closest known black hole, at just 1,500 light-years from Earth.      

The discovery “implies that there are many more [small black holes] that we might find if we increased the volume of space that we searched,” says Tharindu Jayasinghe, an astronomer at Ohio State University and lead author of a new paper detailing the discovery in the Monthly Notices of the Royal Astronomical Society. The finding “should create a push to find these systems.”

Jayasinghe and his colleagues have dubbed the object the “unicorn,” in part because it is unique, and in part because it was found in the constellation Monoceros, named by ancient astronomers after the Greek word for unicorn. By studying this unicorn and other objects like it, researchers hope to get a clearer picture of what happens to stars in the final moments of their lives and why some of them collapse to become black holes while others leave behind dense stellar…