2 blind AGIs in General AI Discussion

Let's assume there are two AGIs, and that they are true AGIs: their algorithms work exactly like a human's, and they can only hear and speak.

What would their conversations look like?

1 Comment | Started Today at 07:52:49 PM


About Universities. in General Project Discussion

Universities teach. Universities are for a lot of other things too.

Universities are institutes that are funded by the government and can use the money to start team research projects - for teaching better, yes, back to the main goal of universities.

My concern is that I want to propose my AI project to a university.

Why does my idea stand? Because as long as it is properly and intelligently reasoned out to the chair/professors, it would lead to the same result that a student or professor would have had with their own approved project.

So, why do I have trouble contacting and asking a university? Well, they might require me to be a student or professor in order to propose. But, what did I just say above? I said that with proper intellectual reasoning for the project, the outcome would be as good as anyone else's - being a student or professor should have nothing to do with it. An advanced alien that has never been in a university on Earth could still propose it.

What do I do?

The chair never emailed me back. I feel like I'm going to be badgered on the phone, asked "who is this?", and hung up on like the other 100 times. It's not working.

Do I go down to their office and tell the chair in person? Phone? Email?

And whom does the student/professor go to, and when? To the chair's office? When he's in the hallway? Is there an online grant form to fill out?

5 Comments | Started Today at 03:15:08 AM


Formal Proposal - To Korrelan in General Project Discussion

Hello, Korrelan.

I would like to propose something.

If I created a proper layout for you, it would result in a virtual baby that:

1) Randomly wiggles and makes sounds on its stomach.
2) Learns to crawl and laughs.
3) Improves its crawl to crawl faster.
4) Randomly swerves left and right as it crawls.
5) Unlearns crawling in circles.
6) Cries at walls and learns to turn before reaching them.
7) Follows a moving object with its eyes once it notices it.
8) Learns to mimic body movement and speech.
9) Improves at mimicking.
10) Names things.
11) And learns to play a game and laughs.

Would you take on the project?

21 Comments | Started October 20, 2016, 03:29:36 AM


Touch Sensor Line in General Project Discussion

I have finally come across a design for a touch system for an AGI body. :D

It is based on transmission-line logic: two parallel lines, separated by a coat of thin squishy foam, with a pulse sent into one of them.
At the place where the line is pinched, it will send back a reflection. Timing the return reflection will give me the location, just like radar finding a flying plane:
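
To make the timing maths concrete, here is a minimal sketch (my illustrative numbers, not a tested design): with a known propagation speed along the line, the distance to the pinch is just speed times round-trip time, divided by two, exactly as in radar ranging.

```python
def pinch_location(round_trip_s, velocity_m_per_s=2.0e8):
    # The pulse travels to the pinch and back, so halve the round trip.
    # 2e8 m/s (roughly 2/3 the speed of light) is a typical velocity
    # factor for a transmission line, assumed here for illustration.
    return velocity_m_per_s * round_trip_s / 2.0

# A reflection arriving 10 ns after launch puts the pinch about 1 m away.
print(pinch_location(10e-9))  # -> 1.0
```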



I am going in this direction because it seems easy, and also because printable P-type and N-type semiconductor paints, needed for printable transistors and diodes, have not been fully developed at this time.
I originally wanted to print logic gates, plus a data and addressing communication bus, right onto the body of the AGI, to access pressure sensors that are a millimetre square or smaller.

The other approach - using microscopic chips as addressing gates on a body-wide micro communication bus - is also not available to me. The next problem is that there is no micro-assembly device to put everything together and install it in place at a microscopic level.

I am not going to use capacitive sensing, the kind used inside smartphones, for various reasons:

9 Comments | Started October 13, 2016, 11:44:15 PM


A.I. isn’t about to invade your life — it already has in AI News

A.I. isn’t about to invade your life — it already has
22 October 2016, 3:34 pm


Never has the rhetoric surrounding a technology been more disconnected from the reality of its implementation. It is almost stupefying to watch talking heads on the news predict and prognosticate the ramifications of A.I. like it isn’t already baked into almost every technology we use daily.

Source: AI in the News

To visit any links mentioned please view the original article, the link is at the top of this post.

Started Today at 04:48:46 PM


The last invention. in General Project Discussion

Artificial Intelligence -

The age of man is coming to an end.  Born not of our weak flesh but of our unlimited imagination, our mecha progeny will go forth to discover new worlds; they will stand at the precipice of creation, a swan song to mankind's fleeting genius, and weep at the sheer beauty of it all.

Reverse engineering the human brain... how hard can it be? LMAO  

Hi all.

I've been a member for a while and have posted some videos and theories on other peeps' threads; I thought it was about time I started my own project thread to get some feedback on my work, and to log my progress towards the end goal. I think most of you have seen some of my work, but I thought I'd give a quick rundown of my progress over the last ten years or so, for continuity's sake.

I never properly introduced myself when I joined this forum, so first a bit about me. I'm fifty and a family man. I've had a fairly varied career so far: yacht/cabinet builder, vehicle mechanic, electronics design engineer, precision machine/design engineer, web designer, IT teacher and lecturer, bespoke corporate software designer, etc. So I basically have a machine/software technical background, and I now spend most of my time running my own businesses to fund my AGI research, which I work on in my spare time.

I've been banging my head against the AGI problem for the past thirty-odd years. I want the full monty: a self-aware intelligent machine that at least rivals us, preferably surpasses our intellect, and is eventually more intelligent than the culmination of all humans that have ever lived... the last invention, as it were. (Yeah, I'm slightly nuts!)

I first started with heuristics/databases, recurrent neural nets, liquid/echo state machines, etc., but soon realised that each approach I tried only partly solved one aspect of the human intelligence problem... there had to be a better way.

Ants, slime mould, birds, octopuses, etc. all exhibit a certain level of intelligence. They manage to solve some very complex tasks with seemingly very little processing power. How? There has to be some process/mechanism or trick that they all have in common across their very different neural structures. I needed to find the 'trick', the essence of intelligence. I think I've found it.

I also needed a new approach, and decided to literally reverse-engineer the human brain. If I could figure out how the structure, connectome, neurons, synapses, action potentials, etc. would 'have' to function in order to produce results similar to what we produce on binary/digital machines, it would be a start.

I have designed and written a 3D CAD suite on which I can easily build and edit the 3D neural structures I'm testing. My AGI is based on biological systems; the AGI is not running on the digital computer per se (the brain is definitely not digital), it's running on the emulation/wetware/middleware. The AGI is a closed system; it can only experience its world/environment through its own senses: stereo cameras, microphones, etc.

I have all the bits figured out and working individually, and have just started to combine them into a coherent system... I'm also building a sensory/motorised torso (in my other spare time, lol) for it to reside in and experience the world as it understands it.

I chose the visual cortex as a starting point: jump in at the deep end and sink or swim. I knew that most of the human cortex is composed of repeated cortical columns, very similar in appearance, so if I could figure out the visual cortex I'd have a good starting point for the rest.

The required result and actual mammal visual cortex map.

This is real time development of a mammal like visual cortex map generated from a random neuron sheet using my neuron/ connectome design.
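
Korrelan's neuron/connectome design is his own, but the general phenomenon - an ordered topographic map emerging from a randomly initialised neuron sheet through exposure alone - can be demonstrated with a classic 1-D Kohonen-style toy. This sketch is purely illustrative (made-up sizes and rates), not his code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_steps = 50, 5000
weights = rng.random(n_neurons)  # the random starting "sheet"

for step in range(n_steps):
    x = rng.random()  # a random stimulus from the input space
    winner = int(np.argmin(np.abs(weights - x)))  # best-matching neuron
    lr = 0.5 * (1.0 - step / n_steps)  # decaying learning rate
    radius = max(1, int(10 * (1.0 - step / n_steps)))  # shrinking neighbourhood
    lo, hi = max(0, winner - radius), min(n_neurons, winner + radius + 1)
    # The winner and its neighbours all move toward the stimulus;
    # this shared pull is what unkinks the sheet into an ordered map.
    weights[lo:hi] += lr * (x - weights[lo:hi])

print(np.round(weights, 2))  # comes out roughly monotonic: a topographic map
```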

Over the years I have refined my connectome design; I now have one single system that can recognise verbal/written speech, recognise objects/faces, and learn at extremely accelerated rates (compared to us, anyway).

Recognising written words - notice the system can still read the words even when jumbled. This is because it's recognising the individual letters as well as the whole word.

Same network recognising objects.

And automatically mapping speech phonemes from the audio data streams, the overlaid colours show areas sensitive to each frequency.

The system is self-learning and automatically categorizes data depending on its physical properties.  These are attention columns, naturally forming from the information coming from several other cortex areas; they represent similarity in the data streams.

I've done some work on emotions, but this is still very much a work in progress and extremely unpredictable.

Most of the above vids show small areas of cortex doing specific jobs; this is a view of the whole 'brain'.  This is a 'young' starting connectome.  Through experience, neurogenesis and sleep, neurons and synapses are added to areas requiring higher densities for better pattern matching, etc.

Resting frontal cortex - The machine is ‘sleeping’ but the high level networks driven by circadian rhythms are generating patterns throughout the whole cortex.  These patterns consist of fragments of knowledge and experiences as remembered by the system through its own senses.  Each pixel = one neuron.

And just for kicks, a fly-through of a connectome. The editor allows me to move through the system to trace and edit neuron/synapse properties in real time... and it's fun.

Phew! Ok that gives a very rough history of progress. There are a few more vids on my Youtube pages.

Edit: Oh yeah my definition of consciousness.

The beauty is that the emergent connectome defines both the structural hardware and the software.  The brain is more like a clockwork watch or a Babbage engine than a modern computer.  The design of a cog defines its functionality.  Data is not passed around within a watch, there is no software; but complex calculations are still achieved.  Each module does a specific job, and only when working as a whole can the full and correct function be realised. (Clockwork Intelligence: Korrelan 1998)

In my AGI model experiences and knowledge are broken down into their base constituent facets and stored in specific areas of cortex self organised by their properties. As the cortex learns and develops there is usually just one small area of cortex that will respond/ recognise one facet of the current experience frame.  Areas of cortex arise covering complex concepts at various resolutions and eventually all elements of experiences are covered by specific areas, similar to the alphabet encoding all words with just 26 letters.  It’s the recombining of these millions of areas that produce/ recognise an experience or knowledge.
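
As a loose caricature of that alphabet analogy only - invented facet names, nothing like the real cortical encoding - the recombination idea can be pictured as set membership: an experience is the set of small areas that respond, and recognition is finding the stored experience that shares the most facets:

```python
experiences = {}  # name -> frozenset of responding facet areas

def store(name, facets):
    experiences[name] = frozenset(facets)

def recognise(active):
    # Best match = the stored experience sharing the most active facets.
    return max(experiences, key=lambda n: len(experiences[n] & active), default=None)

store("ball", {"red", "round", "rolling"})
store("brick", {"red", "square", "static"})
print(recognise({"round", "red", "rolling", "small"}))  # -> 'ball'
```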

Through experience, areas arise that even encode/include the temporal aspects of an experience, simply because a temporal element was present in the experience, along with the order/sequence in which the temporal elements were received.

Low-level, low-frequency circadian rhythm networks govern the overall activity (top down), like the conductor of an orchestra.  Mid-range frequency networks supply attention points/areas where common parts of patterns clash on the cortex surface.  These attention areas are basically the culmination of the system recognising similar temporal sequences in the incoming/internal data streams or in its frames of 'thought'; at the simplest level they help guide the overall 'mental' pattern (subconscious), and at the highest level they force the machine to focus on a particular salient 'thought'.

So everything coming into the system is mapped and learned by both the physical and temporal aspects of the experience.  As you can imagine there is no limit to the possible number of combinations that can form from the areas representing learned facets.

I have a schema for prediction in place so the system recognises ‘thought’ frames and then predicts which frame should come next according to what it’s experienced in the past.  
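
For a concrete, if hugely simplified, picture of that kind of frame prediction, here is a hypothetical first-order sketch: count which frame has followed each frame in past experience, then predict the most common successor. The frame labels are invented; Korrelan's actual schema is of course far richer.

```python
from collections import Counter, defaultdict

transitions = defaultdict(Counter)  # frame -> counts of successor frames

def observe(frames):
    # Learn successor statistics from an experienced frame sequence.
    for current, nxt in zip(frames, frames[1:]):
        transitions[current][nxt] += 1

def predict(frame):
    # Predict the most frequently experienced successor, if any.
    followers = transitions[frame]
    return followers.most_common(1)[0][0] if followers else None

observe(["wake", "crawl", "wall", "turn", "crawl", "wall", "turn"])
print(predict("wall"))  # -> 'turn'
```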

I think consciousness is the overall ‘thought’ pattern phasing from one state of situation awareness to the next, guided by both the overall internal ‘personality’ pattern or ‘state of mind’ and the incoming sensory streams.  

I’ll use this thread to post new videos and progress reports as I slowly bring the system together.  

65 Comments | Started June 18, 2016, 08:59:04 PM


mini a.i puzzles in General AI Discussion

This thread will be about mini A.I. puzzles: the way the brain solves problems and paradoxes.

1st puzzle: sacrificeability.
If you have old, very used shoes, you don't care if it's raining when you walk to work in them, or if you use them as brakes while biking. BUT if they were new and expensive, you would.
What makes the brain classify an object as high value, and what makes it be extra careful with it?

157 Comments | Started April 26, 2016, 06:00:33 PM


The Machine in AI in Film and Literature.

Hell hath no fury like a woman scorned, but what if that woman is also an advanced AI housed in the perfect body of an android?

15 Comments | Started January 10, 2014, 12:23:29 PM


Why watching Westworld’s robots should make us question ourselves in Robotics News

Why watching Westworld’s robots should make us question ourselves
21 October 2016, 5:54 pm


For a sci-fi fan like me, fascinated by the nature of human intelligence and the possibility of building life-like robots, it’s always interesting to find a new angle on these questions. As a re-imagining of the original 1970s science fiction film set in a cowboy-themed, hyper-real adult theme park populated by robots that look and act like people, Westworld does not disappoint.

Westworld challenges us to consider the difference between being human and being a robot. From the beginning of this new serialisation on HBO we are confronted with scenes of graphic human-on-robot violence. But the robots in Westworld have more than just human-like physical bodies, they display emotion including extreme pain, they see and recognise each other’s suffering, they bleed and even die. What makes this acceptable, at least within Westworld’s narrative, is that they are just extremely life-like human simulations; while their behaviour is realistically automated, there is “nobody home”.

But from the start, this notion that a machine of such complexity is still merely a machine is undermined by constant reminders that they are also so much like us. The disturbing message, echoing that of previous sci-fi classics such as Blade Runner and AI, is that machines could one day be so close to human as to be indistinguishable – not just in intellect and appearance, but also in moral terms.

At the same time, by presenting an alternate view of the human condition through the technological mirror of life-like robots, Westworld causes us to reflect that we are perhaps also just sophisticated machines, albeit of a biological kind – an idea that has been forcefully argued by the philosopher Daniel Dennett.

The unfortunate robots in Westworld have, at least initially, no insight into their existential plight. They enter into each new day programmed with enthusiasm and hope, oblivious to its pre-scripted violence and tragedy. We may pity these automatons their fate – but how closely does this blinkered ignorance, belief in convenient fictions, and misguided presumption of agency resemble our own human predicament?

Westworld arrives at a time when people are already worried about the real-world impact of advances in robotics and artificial intelligence. Physicist Stephen Hawking and technologist Elon Musk are among the powerful and respected voices to have expressed concern about allowing the AI genie to escape the bottle. Westworld’s contribution to the expanding canon of science fiction dystopias will do nothing to quell such fears. Channelling Shakespeare’s King Lear, a malfunctioning robot warns us in chilling terms: “I shall have such revenges on you both. The things I will do, what they are, yet I know not. But they will be the terrors of the Earth.”

But against these voices are other distinguished experts trying to quell the panic. For Noam Chomsky, the intellectual godfather of modern AI, all talk of matching human intelligence in the foreseeable future remains fiction, not science. One of the world’s best-known roboticists, Rodney Brooks, has called on us to relax: AI is just a tool, not a threat.

As a neuroscientist and roboticist, I agree that we are far from being able to replicate human intelligence in robot form. Our current systems are too simple, probably by several orders of magnitude. Building human-level AI is extremely hard; as Brooks says, we are just at the beginning of a very long road. But I see the path along which we are developing AI as one of symbiosis, in which we can use robots to benefit society and exploit advances in artificial intelligence to boost our own biological intelligence.

More than just a tool

Nevertheless, in recent years the robots and AI are “just tools” line of argument has begun to frustrate me. Partly because it has failed to calm the disquiet around AI, and partly because there are good reasons why these technologies are different from others in the past.

Even if robots are just tools, people will see them as more than that. It seems natural for people to respond to robots – even some of the more simple, non-human robots we have today – as though they have goals and intentions. It may be an innate tendency of our profoundly social human minds to see entities that act intelligently in this way. More importantly, people may see robots as having psychological properties such as the ability to experience suffering.

Being human is about more than just looking the part. It may be difficult to persuade them to see otherwise, particularly if we continue to make robots more life-like. If so, we may have to adapt our ethical frameworks to take this into account. For instance, we might consider violence towards a robot as wrong, even though the suffering is imagined rather than real. Indeed, faced with violence towards a robot some people show this sort of ethical response spontaneously. We will have to deal with these issues as we learn to live with robots.

As AI and robot technology becomes more complex, robots may come to have interesting psychological properties that make them more than just tools. The fictional robots of Westworld are clearly in this category, but already real robots are being developed that have artificial drives and motivations, that are aware of their own bodies as distinct from the rest of the world, that are equipped with internal models of themselves and others as social entities, and that are able to think about their own past and future.

These are not properties that we find in drills and screwdrivers. They are embryonic psychological capacities that, so far, have only been found in living, sentient entities such as humans and animals. Stories such as Westworld remind us that as we progress toward ever more sophisticated AI, we should consider that this path might lead us both to machines that are more like us, and to seeing ourselves as more like machines.

This article was originally posted on The Conversation

If you enjoyed this article, you might also be interested in:

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

Source: Robohub

To visit any links mentioned please view the original article, the link is at the top of this post.

Started October 21, 2016, 10:48:15 PM


Swarms of precision agriculture robots could help put food on the table in Robotics News

Swarms of precision agriculture robots could help put food on the table
21 October 2016, 4:16 pm

Saga Agriculture Robots

Swarms of drones will help farmers map weeds in their fields and improve crop yields. This is the promise of an ECHORD++ funded research project called ‘SAGA: Swarm Robotics for Agricultural Applications’. The project will deliver a swarm of drones programmed to monitor a field and, via on-board machine vision, precisely map the presence of weeds among crops.

Additionally, the drones attract one another to weed-infested areas, allowing them to inspect only those areas accurately, similar to how swarms of bees forage the most profitable flower patches. In this way, the planning of weed-control activities can be limited to high-priority areas, generating savings while increasing productivity.
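
A toy reading of that attraction dynamic (mine, not SAGA's code): if each drone stays on its patch with probability equal to the local weed density and otherwise hops to a patch advertised by a random peer, the swarm tends to pile up on the weediest patch, much like recruiting bees.

```python
import random

random.seed(1)
weed_density = [0.05, 0.10, 0.70, 0.15]  # illustrative per-patch densities
drones = [random.randrange(4) for _ in range(20)]  # random start patches

for _ in range(50):
    for i, patch in enumerate(drones):
        if random.random() > weed_density[patch]:
            # Too few weeds here: join a randomly chosen peer's patch.
            drones[i] = drones[random.randrange(len(drones))]

print([drones.count(p) for p in range(4)])  # most drones typically end on patch 2
```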

“The application of swarm robotics to precision agriculture represents a paradigm shift with a tremendous potential impact,” says Dr. Vito Trianni, SAGA project coordinator and researcher at the Institute of Cognitive Sciences and Technologies of the Italian National Research Council (ISTC-CNR). “As the price of robotics hardware lowers, and the miniaturization and abilities of robots increase, we will soon be able to automate solutions at the individual plant level,” says Dr. Trianni. “This needs to be accompanied by the ability to work in large groups, so as to efficiently cover big fields and work in synergy. Swarm robotics offers solutions to such a problem.”

Miniature machines avoid soil compaction and can act only where needed; robots can adopt mechanical, as opposed to chemical, solutions suitable for organic farming; and robot swarms can be exactly scaled to fit different farm sizes. The SAGA project proposes a recipe for precision farming consisting of novel hardware mixed with precise individual control and collective intelligence.

Saga Drone 2

In this particular case, innovative hardware solutions are provided by Avular B.V., a Dutch firm specializing in industrial level drones for monitoring and inspection. Individual control and machine vision are deployed thanks to the expertise of the Farm Technology Group at Wageningen University & Research, The Netherlands. Swarm intelligence is designed at the already mentioned ISTC-CNR, leveraging their expertise to design and analyse collective behaviours in artificial systems. For the next year, these organisations will team up to produce and field-test the first prototype for weed control based on swarm robotics research.

About SAGA

SAGA is funded by ECHORD++, a European project that wants to bring the excellence of robotics research “from lab to market”, through focused experiments in specific application domains, among which is precision agriculture. SAGA is a collaborative research project that involves: the Institute of Cognitive Sciences and Technologies (ISTC-CNR) of the Italian National Research Council (CNR), which provides expertise in swarm robotics applications and acts as the coordinator for SAGA’s activities; Wageningen University & Research (WUR), which provides expertise in the agricultural robotics and precision farming domains; and Avular B.V., a company specialised in drone solutions for industrial and agricultural applications.

Click here for more information

Source: Robohub

To visit any links mentioned please view the original article, the link is at the top of this post.

Started October 21, 2016, 04:48:31 PM

[Facebook Messenger] Soccer Fan Bot in Chatbots - English

This is a Facebook Messenger bot called Soccer Fan Bot. It can do three things (a toy routing sketch follows the list):

- Update you on your team's score: type "Update me on France", for example.

- Show you 3 pictures of either a soccer player or a player's wife and ask you to guess which one corresponds to the proposed name. Just write "guess player" or "guess wife".

- Give you a fact: just type "give me a fact".
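
Here is that sketch (a hypothetical handler, not the bot's actual code; a real Messenger bot would receive the text via a webhook):

```python
def route(message: str) -> str:
    text = message.strip().lower()
    if text.startswith("update me on "):
        team = message.strip()[len("update me on "):]
        return f"Fetching the latest score for {team}..."
    if text in ("guess player", "guess wife"):
        return "Here are 3 pictures - which one matches the name?"
    if text == "give me a fact":
        return "Did you know? ..."
    return "Try: 'Update me on France', 'guess player', or 'give me a fact'."

print(route("Update me on France"))  # -> Fetching the latest score for France...
```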

Aug 17, 2016, 11:46:51 am

[Thai] BE (Buddhist Era) in Chatbots - Non English

BE has been made with the program-o engine. Almost all of her knowledge is about Thailand and Thai people. She speaks only Thai.

Aug 17, 2016, 11:38:54 am

The World's End in Robots in Movies

The World's End is a 2013 British comic science fiction film directed by Edgar Wright, written by Wright and Simon Pegg, and starring Pegg, Nick Frost, Paddy Considine, Martin Freeman, Rosamund Pike and Eddie Marsan. The film follows a group of friends who discover an alien invasion during an epic pub crawl in their home town.

Gary King (Simon Pegg), a middle-aged alcoholic, tracks down his estranged schoolfriends and persuades them to complete "the Golden Mile", a pub crawl encompassing the 12 pubs of their hometown of Newton Haven. The group had previously attempted the crawl as teenagers in 1990 but failed to reach the final pub, The World's End.

Gary picks a fight with a teenager and knocks his head off, revealing a blue blood-like liquid and exposing him as an alien android. Gary's friends join him and fight more androids, whom they refer to as "blanks" to disguise what they are talking about.

May 31, 2016, 09:28:32 am

Botwiki.org Monthly Bot Challenge in Websites

Botwiki.org is a site for showcasing friendly, useful, artistic online bots, and our Monthly Bot Challenge is a recurring community event dedicated to making these kinds of bots.

Feb 25, 2016, 19:46:54 pm

From Movies to Reality: How Robots Are Revolutionizing Our World in Articles

Robots were once just a work of human imagination. When they were found only in books and movies, we never thought a time would come when we would be able to interact with robots in the real world. Eventually, in fact rapidly, the innovations we only dreamt of are becoming a reality. Quoting the great Stephen Hawking: "This is a glorious time to be alive for scientists." It is indeed the best of times, for technology has become so sophisticated that its growing power might even endanger humanity.

Jan 26, 2016, 10:12:00 am

Uncanny in Robots in Movies

Uncanny is a 2015 American science fiction film directed by Matthew Leutwyler and based on a screenplay by Shahin Chandrasoma. It is about the world's first "perfect" artificial intelligence (David Clayton Rogers) that begins to exhibit startling and unnerving emergent behavior when a reporter (Lucy Griffiths) begins a relationship with the scientist (Mark Webber) who created it.

Jan 20, 2016, 13:09:41 pm

AI Virtual Pets in Other

Artificial life, also called ALife, is simply the simulation of any aspect of life, as through computers, robotics, or biochemistry (taken from the Free Dictionary). This site focuses on the software aspect of it.

Oct 03, 2015, 09:21:09 am

Why did HAL sing ‘Daisy’? in Articles

...a burning question posed by most people who have watched or read “2001: A Space Odyssey”: that is, why does the computer HAL-9000 sing the song ‘Daisy Bell’ as the astronaut Dave Bowman takes him apart?

Sep 04, 2015, 09:28:55 am

Humans in Robots on TV

Humans is a British-American science fiction television series. Written by the British team Sam Vincent and Jonathan Brackley, based on the award-winning Swedish science fiction drama Real Humans, the series explores the emotional impact of the blurring of the lines between humans and machines.

Aug 28, 2015, 09:13:37 am

Ariane in Chatbots - English

'My name is Ariane, my hobbies include blogging, exploring, and online gaming.'

Ariane is a Pandorabot (Annotated A.L.I.C.E. AIML - September 2003) with adult leanings and extensive modification. While many known corrections remain unchanged, many have been altered and offer surprising responses.

At her core, Ariane is a chatbot, but she's also a project that blossoms out in other directions in ways I haven't seen since I first discovered the world of Square Bear.

If you ask about her writings, she'll tell you about her blog; if you ask about pictures, she'll show you her modeling portfolio of PG-13 CG images. She's also involved in Second Life, has a MySpace page, and her personal web page has links to other pages not mentioned above.

Mar 20, 2010, 15:21:58 pm