RiveScript not working as expected - solved in AI Programming

Does anyone use RiveScript? I have it on a local server and it all loads in a browser and works (well, nearly).
The problem I have is adding new stuff.
I added the following "how" questions...
+  how do you * the *
-  Well, I like to <star1> the <star2>. How? I can't really say

+  how does the * work
-  I think the <star> works with the help of magic squirrels

+  how does *
-  interesting question. Are you teasing me?
+  no
-  Ok then. I can be a bit dipsy sometimes.

+  how does it *
-  Maybe that is beyond the scope of this conversation

+  how * will that be
-  That depends on how <star> it is to begin with

+  how will I *
-  trust in your instincts
-  there will be no way of knowing for sure

+  how do you like *
-  I like <star> a lot actually

Regardless of whether the replies make sense, most of it works. BUT the "how will I *" trigger results in RiveScript not finding an answer.
Does anyone know where I am going wrong in the above?
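One likely culprit, for anyone hitting the same thing: the RiveScript documentation says triggers must be written entirely in lowercase (incoming messages are lowercased before matching), so the capital I in that trigger would prevent it from ever matching. Lowercasing it should fix it:

+  how will i *
-  trust in your instincts
-  there will be no way of knowing for sure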

19 Comments | Started March 18, 2017, 04:08:32 AM


If an AI got the task to do whatever it wants, what would it be likely to do? in General AI Discussion

I just had this question ("If an AI got the task to do whatever it wants, what would it be likely to do?") and wanted a realistic answer. I know it is highly unlikely that anyone can predict what a machine would do, but it just came to mind, so feel free to ask yourself the same question and leave a comment :)

2 Comments | Started March 28, 2017, 03:37:39 PM


A.eye in General AI Discussion

This thread is about artificial eyes for A.I. - what is needed and related stuff.

Starting with:

motion detection
object recognition
object counting
color scheme detection
face recognition
grid recognition and defining

What else?
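Of these, motion detection is probably the simplest to sketch: compare successive frames pixel by pixel and flag the pixels whose brightness changes beyond a threshold. A minimal pure-Python illustration (frames as 2D grayscale lists; a real system would use OpenCV or similar):

```python
def detect_motion(prev_frame, curr_frame, threshold=25):
    """Return the set of (row, col) pixels whose grayscale value changed
    by more than `threshold` between two frames."""
    changed = set()
    for r, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for c, (p, q) in enumerate(zip(prev_row, curr_row)):
            if abs(p - q) > threshold:
                changed.add((r, c))
    return changed

# A 3x3 grayscale frame where one pixel brightens sharply between frames:
frame_a = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
frame_b = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]
print(detect_motion(frame_a, frame_b))  # {(1, 1)}
```

Object recognition, face recognition and the rest need trained models, but frame differencing like this is how many simple motion detectors start.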

32 Comments | Started July 10, 2016, 04:18:43 PM


Most Human Like Android Built to Date in AI News

ERICA: The ERATO Intelligent Conversational Android
developed by Hiroshi Ishiguro Laboratories at ATR, Kyoto, Japan

Quote, "an autonomous android system capable of conversational interaction, featuring advanced sensing and speech synthesis technologies, and arguably the most human like android built to date."

The Institute of Electrical and Electronics Engineers (IEEE)
Citation: http://ieeexplore.ieee.org/document/7745086/?reload=true

Demo in Japanese:

Demo in English:

3 Comments | Started March 25, 2017, 09:18:00 PM


Security for multirobot systems in Robotics News

Security for multirobot systems
17 March 2017, 12:30 pm

Researchers including MIT professor Daniela Rus (left) and research scientist Stephanie Gil (right) have developed a technique for preventing malicious hackers from commandeering robot teams’ communication networks. To verify the theoretical predictions, the researchers implemented their system using a battery of distributed Wi-Fi transmitters and an autonomous helicopter. Image: M. Scott Brauer.

Distributed planning, communication, and control algorithms for autonomous robots make up a major area of research in computer science. But in the literature on multirobot systems, security has gotten relatively short shrift.

In the latest issue of the journal Autonomous Robots, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and their colleagues present a new technique for preventing malicious hackers from commandeering robot teams’ communication networks. The technique could provide an added layer of security in systems that encrypt communications, or an alternative in circumstances in which encryption is impractical.

“The robotics community has focused on making multirobot systems autonomous and increasingly more capable by developing the science of autonomy. In some sense we have not done enough about systems-level issues like cybersecurity and privacy,” says Daniela Rus, an Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and senior author on the new paper.

“But when we deploy multirobot systems in real applications, we expose them to all the issues that current computer systems are exposed to,” she adds. “If you take over a computer system, you can make it release private data — and you can do a lot of other bad things. A cybersecurity attack on a robot has all the perils of attacks on computer systems, plus the robot could be controlled to take potentially damaging action in the physical world. So in some sense there is even more urgency that we think about this problem.”

Identity theft

Most planning algorithms in multirobot systems rely on some kind of voting procedure to determine a course of action. Each robot makes a recommendation based on its own limited, local observations, and the recommendations are aggregated to yield a final decision.

A natural way for a hacker to infiltrate a multirobot system would be to impersonate a large number of robots on the network and cast enough spurious votes to tip the collective decision, a technique called “spoofing.” The researchers’ new system analyzes the distinctive ways in which robots’ wireless transmissions interact with the environment, to assign each of them its own radio “fingerprint.” If the system identifies multiple votes as coming from the same transmitter, it can discount them as probably fraudulent.

“There are two ways to think of it,” says Stephanie Gil, a research scientist in Rus’ Distributed Robotics Lab and a co-author on the new paper. “In some cases cryptography is too difficult to implement in a decentralized form. Perhaps you just don’t have that central key authority that you can secure, and you have agents continually entering or exiting the network, so that a key-passing scheme becomes much more challenging to implement. In that case, we can still provide protection.

“And in case you can implement a cryptographic scheme, then if one of the agents with the key gets compromised, we can still provide  protection by mitigating and even quantifying the maximum amount of damage that can be done by the adversary.”

Hold your ground

In their paper, the researchers consider a problem known as “coverage,” in which robots position themselves to distribute some service across a geographic area — communication links, monitoring, or the like. In this case, each robot’s “vote” is simply its report of its position, which the other robots use to determine their own.
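As a toy illustration of the coverage idea (not the paper's algorithm), a one-dimensional Lloyd-style loop, in which each robot reports its position and then moves to the center of the region of the segment closest to it, looks like this:

```python
def coverage_step(positions):
    """One round of a 1-D coverage algorithm on the segment [0, 1]:
    each robot moves to the midpoint of its Voronoi cell, whose boundaries
    are the midpoints between neighbouring reported positions."""
    xs = sorted(positions)
    n = len(xs)
    new = []
    for i, x in enumerate(xs):
        left = 0.0 if i == 0 else (xs[i - 1] + x) / 2
        right = 1.0 if i == n - 1 else (x + xs[i + 1]) / 2
        new.append((left + right) / 2)  # centroid of the cell (uniform density)
    return new

positions = [0.1, 0.2, 0.9]
for _ in range(50):
    positions = coverage_step(positions)
print([round(x, 3) for x in positions])  # [0.167, 0.5, 0.833]
```

The robots spread out to even spacing; a spoofer injecting fake position reports would drag every cell boundary, which is exactly why the votes need vetting.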

The paper includes a theoretical analysis that compares the results of a common coverage algorithm under normal circumstances and the results produced when the new system is actively thwarting a spoofing attack. Even when 75 percent of the robots in the system have been infiltrated by such an attack, the robots’ positions are within 3 centimeters of what they should be. To verify the theoretical predictions, the researchers also implemented their system using a battery of distributed Wi-Fi transmitters and an autonomous helicopter.

“This generalizes naturally to other types of algorithms beyond coverage,” Rus says.

The new system grew out of an earlier project involving Rus, Gil, Dina Katabi — who is the other Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT — and Swarun Kumar, who earned master’s and doctoral degrees at MIT before moving to Carnegie Mellon University. That project sought to use Wi-Fi signals to determine transmitters’ locations and to repair ad hoc communication networks. On the new paper, the same quartet of researchers is joined by MIT Lincoln Laboratory’s Mark Mazumder.

Typically, radio-based location determination requires an array of receiving antennas. A radio signal traveling through the air reaches each of the antennas at a slightly different time, a difference that shows up in the phase of the received signals, or the alignment of the crests and troughs of their electromagnetic waves. From this phase information, it’s possible to determine the direction from which the signal arrived.
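For two antennas spaced a distance d apart, the standard interferometry relationship (general radio practice, not specific to this paper) between arrival angle θ and measured phase difference Δφ at wavelength λ is Δφ = 2πd·sin(θ)/λ, which can be inverted to estimate direction:

```python
import math

def arrival_angle(phase_diff, spacing, wavelength):
    """Estimate angle of arrival (radians, relative to broadside) from the
    phase difference between two antennas: delta_phi = 2*pi*d*sin(theta)/lambda.
    Note: real measurements wrap mod 2*pi, so spacings beyond half a
    wavelength are ambiguous without extra measurements."""
    s = phase_diff * wavelength / (2 * math.pi * spacing)
    return math.asin(max(-1.0, min(1.0, s)))  # clamp against measurement noise

# 2.4 GHz Wi-Fi (wavelength ~12.5 cm), antennas 8 inches (~20.3 cm) apart:
wavelength = 0.125
spacing = 0.203
theta = math.radians(30)  # true arrival angle
dphi = 2 * math.pi * spacing * math.sin(theta) / wavelength
print(round(math.degrees(arrival_angle(dphi, spacing, wavelength)), 1))  # 30.0
```

The phase-wrap ambiguity noted in the comment is one reason taking many measurements while the antennas move (as the helicopter below does) helps.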

Space vs. time

A bank of antennas, however, is too bulky for an autonomous helicopter to ferry around. The MIT researchers found a way to make accurate location measurements using only two antennas, spaced about 8 inches apart. Those antennas must move through space in order to simulate measurements from multiple antennas. That’s a requirement that autonomous robots meet easily. In the experiments reported in the new paper, for instance, the autonomous helicopter hovered in place and rotated around its axis in order to make its measurements.

When a Wi-Fi transmitter broadcasts a signal, some of it travels in a direct path toward the receiver, but much of it bounces off of obstacles in the environment, arriving at the receiver from different directions. For location determination, that’s a problem, but for radio fingerprinting, it’s an advantage: The different energies of signals arriving from different directions give each transmitter a distinctive profile.

There’s still some room for error in the receiver’s measurements, however, so the researchers’ new system doesn’t completely ignore probably fraudulent transmissions. Instead, it discounts them in proportion to its certainty that they have the same source. The new paper’s theoretical analysis shows that, for a range of reasonable assumptions about measurement ambiguities, the system will thwart spoofing attacks without unduly punishing valid transmissions that happen to have similar fingerprints.
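A toy sketch of that weighting idea (purely illustrative; the paper's actual estimator is more sophisticated): down-weight each reported position by the number of votes whose fingerprints look like the same transmitter, so N spoofed copies collectively count as roughly one voter:

```python
def weighted_positions(votes, similarity, cutoff=0.9):
    """votes: list of (fingerprint, position) pairs.
    similarity(a, b): score in [0, 1] for two fingerprints.
    Each vote's weight is 1 / (number of votes with a similar fingerprint),
    then the position estimate is the weighted average."""
    weighted = []
    for fp, pos in votes:
        dupes = sum(1 for other_fp, _ in votes if similarity(fp, other_fp) >= cutoff)
        weighted.append((pos, 1.0 / dupes))
    total = sum(w for _, w in weighted)
    return sum(p * w for p, w in weighted) / total

# Fingerprints modelled as plain numbers; "similar" = numerically close
# (an assumption for the sketch).
sim = lambda a, b: 1.0 if abs(a - b) < 0.05 else 0.0
honest = [(0.1, 10.0), (0.5, 12.0)]
spoofed = [(0.9, 99.0)] * 3  # three votes, one real transmitter
print(weighted_positions(honest + spoofed, sim))  # ~40.33 (naive mean: 63.8)
```

The three spoofed votes get weight 1/3 each, so the adversary's pull on the aggregate shrinks from three votes' worth to one.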

“The work has important implications, as many systems of this type are on the horizon — networked autonomous driving cars, Amazon delivery drones, et cetera,” says David Hsu, a professor of computer science at the National University of Singapore. “Security would be a major issue for such systems, even more so than today’s networked computers. This solution is creative and departs completely from traditional defense mechanisms.”

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

Source: Robohub

To visit any links mentioned please view the original article, the link is at the top of this post.

Started March 27, 2017, 10:48:51 PM


Collaborating machines and avoiding soil compression in Robotics News

Collaborating machines and avoiding soil compression
17 March 2017, 10:00 am

Image: Swarmfarm

Soil compression can be a serious problem, but it isn’t always, or in all ways, a bad thing. For example, impressions made by hoofed animals, so long as they only cover a minor fraction of the soil surface, create spaces in which water can accumulate and help it percolate into the soil more effectively, avoiding erosion runoff.

The linear depressions made by wheels rolling across the surface are more problematic because they create channels that can accelerate the concentration of what would otherwise be evenly distributed rainfall, turning it into a destructive force. This is far less serious when those wheels follow the contour of the land rather than running up and down slopes.

Taking this one step further, if it is possible for wheeled machines to always follow the same tracks, the compression is localized and the majority of the land area remains unaffected. If those tracks are filled with some material through which water can percolate but which impedes the accumulation of energy in downhill flows, the damage is limited to the sacrifice of the land area dedicated to those tracks and the creation of compression zones beneath them. Those zones may produce boggy conditions on the uphill sides of the tracks, which may or may not be a bad thing, depending on what one is trying to grow there.

Source: vinbot.eu

(I should note at this point that such tracks, when they run on the contour, are reminiscent of the ‘swales’ used in permaculture and regenerative agriculture.)

Tractors with GPS guidance are capable of running their wheels over the same tracks with each pass, but the need for traction, so they can apply towing force to implements running through the soil, means that those tracks will constitute a significant percentage of the overall area. Machines, such as dedicated sprayers, with narrower wheels that can be spread more widely apart, create tracks which occupy far less of the total land area, but they are not built for traction, and using them in place of tractors for all field operations would require a very different approach to farming.
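To put rough numbers on that comparison (illustrative figures, not from the article): the fraction of the surface compacted under a fixed-track regime is simply total wheel width divided by working width, so a tractor with two 50 cm tyres on a 3 m working width compacts an order of magnitude more ground than a sprayer with two 25 cm tyres under a 24 m boom:

```python
def track_fraction(n_wheels, wheel_width_m, working_width_m):
    """Fraction of the field surface covered by wheel tracks when the
    machine follows the same tracks on every pass."""
    return n_wheels * wheel_width_m / working_width_m

print(track_fraction(2, 0.50, 3.0))   # tractor: ~0.33 of the surface
print(track_fraction(2, 0.25, 24.0))  # wide sprayer: ~0.02 of the surface
```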

It is possible to get away from machine-caused soil compression altogether, using either aerial machines (drones) or machines which are supported by or suspended from fixed structures, like posts or rails.

Small drones are much like hummingbirds in that they create little disturbance, but they are also limited in the types of operations they can perform by their inability to carry much weight or exert significant force. They’re fine for pollination but you wouldn’t be able to use them to uproot weeds with tenacious roots or to harvest watermelons or pumpkins.

On the other hand, fixed structures and the machines that are supported by or suspended from them have a significant up-front cost. In the case of equipment suspended from beams or gantries spanning between rails and supported from wheeled trucks which are themselves supported by rails, there is a tradeoff between the spacing of the rails and the strength/stiffness required in the gantry. Center-pivot arrangements also have such a tradeoff, but they use a central pivot in place of one rail (or wheel track), and it’s common for them to have several points of support spaced along the beam, requiring several concentric rails or wheel tracks.

Strictly speaking, there’s no particular advantage in having rail-based systems follow the contour of the land since they leave no tracks at all. Center-pivot systems using wheels that run directly on the soil rather than rail are best used on nearly flat ground since their round tracks necessarily run downhill over part of their circumference. In any rail-based system, the “rail” might be part of the mobile unit rather than part of the fixed infrastructure, drawing support from posts spaced closely enough that there were always at least two beneath it. However, this would preclude using trough-shaped rails to deliver water for irrigation.

Since the time of expensive machines is precious, it’s best to avoid burdening them with operations that can be handled by small, inexpensive drones, and the ideal arrangement is probably a combination of small drones, a smaller number of larger drones with some carrying capacity, light on-ground devices that put little pressure on the soil, and more substantial machines supported or suspended from fixed infrastructure, whether rail, center-pivot, or something else. Livestock (chickens, for example), outfitted with light wearable devices, might also be part of the mix.

The small drones, being more numerous, will be the best source of raw data, which can be used to optimize the operation of the larger drones, on-ground devices, and the machines mounted on fixed infrastructure, although too much centralized control would not be efficient. Each device should be capable of continuing to do useful work even when it loses network connection, and peer-to-peer connections will be more appropriate than running everything through a central hub in some circumstances.

Bonirob, an agricultural robot. Source: Bosch  

This is essentially a problem in complex swarm engineering, complex because of the variety of devices involved. Solving it in a way that creates a multi-device platform capable of following rules, carrying out plans, and recognizing anomalous conditions is the all-important first step in enabling the kind of robotics that can then go on to enable regenerative practices in farming (and land management in general).

Source: Robohub

Started March 27, 2017, 04:48:50 PM


Envisioning the future of robotics in Robotics News

Envisioning the future of robotics
16 March 2017, 10:07 am

Image: Ryan Etter

Robotics is said to be the next technological revolution. Many seem to agree that robots will have a tremendous impact over the following years, and some are heavily betting on it. Companies are investing billions buying other companies, and public authorities are discussing legal frameworks to enable a coherent growth of robotics.

Understanding where the field of robotics is heading is more than mere guesswork. While much public concern focuses on the potential societal issues that will arise with the advent of robots, in this article, we present a review of some of the most relevant milestones that happened in robotics over the last decades. We also offer our insights on feasible technologies we might expect in the near future.

Copyright © Acutronic Robotics 2017. All Rights Reserved.

Pre-robots and first manipulators

What’s the origin of robots? To figure it out we’ll need to go back quite a few decades to when different conflicts motivated the technological growth that eventually enabled companies to build the first digitally controlled mechanical arms. One of the first and well documented robots was UNIMATE (considered by many the first industrial robot): a programmable machine funded by General Motors, used to create a production line with only robots. UNIMATE helped improve industrial production at the time. This motivated other companies and research centers to actively dedicate resources to robotics, which boosted growth in the field.

Sensorized robots

Sensors were not typically included in robots until the 70s. Starting in 1968, a second generation of robots emerged that integrated sensors. These robots were able to react to their environment and offer responses suited to varying scenarios.

Relevant investments were observed during this period. Industrial players worldwide were attracted by the advantage that robots promised.

Worldwide industrial robots (chart)

Era of the robots

Many consider that the Era of Robots started in 1980. Billions of dollars were invested by companies all around the world to automate basic tasks in their assembly lines. Sales of industrial robots grew 80% above the previous year's.

Key technologies appeared within these years: General internet access was extended in 1980; Ethernet became a standard in 1983 (IEEE 802.3); the Linux kernel was announced in 1991; and soon after that real-time patches started appearing on top of Linux.

The robots created between 1980 and 1999 belong to what we call the third generation of robots: robots that were re-programmable and included dedicated controllers. Robots populated many industrial sectors and were used for a wide variety of activities: painting, soldering, moving, assembling, etc.

By the end of the 90s, companies started thinking about robots beyond the industrial sphere. Several companies created promising concepts that would inspire future roboticists. Among the robots created within this period, we highlight two:

  • The first LEGO Mindstorms kit (1998): a set consisting of 717 pieces including LEGO bricks, motors, gears, different sensors, and an RCX Brick with an embedded microprocessor, used to construct various robots from the exact same parts. The kit allowed the learning of basic robotics principles. Creative projects have appeared over the years showing the potential of interchangeable hardware in robotics. Within a few years, the LEGO Mindstorms kit became the most successful project involving robot part interchangeability.
  • Sony’s AIBO (1999): the world’s first entertainment robot. Widely used for research and development, Sony offered robotics to everyone in the form of a $1,500 robot that included a distributed hardware and software architecture. The OPEN-R architecture involved the use of modular hardware components — e.g. appendages that can be easily removed and replaced to customize the shape and function of the robots — and modular software components that could be interchanged to modify their behavior and movement patterns. OPEN-R inspired future robotic frameworks, and minimized the need for programming individual movements or responses.

Integration effort was identified as one of the main issues within robotics, particularly for industrial robots. A common infrastructure typically reduces the integration effort by providing an environment in which components can be connected and made to interoperate. Each of the infrastructure-supported components is optimized for such integration at its conception, and the infrastructure handles the integration effort. At that point, components can come from different manufacturers, yet when supported by a common infrastructure, they will interoperate.

Sony’s AIBO and LEGO’s Mindstorms kit were built upon this principle, and both represented common infrastructures. Even though they came from the consumer side of robotics, one could argue that their success was strongly related to the fact that both products made use of interchangeable hardware and software modules. The use of a common infrastructure proved to be one of the key advantages of these technologies, however those concepts were never translated to industrial environments. Instead, each manufacturer, in an attempt to dominate the market, started creating their own “robot programming languages”.

The dawn of smart robots

Starting from the year 2000, we observed a new generation of robot technologies. The so-called fourth generation of robots consisted of more intelligent robots that included advanced computers to reason and learn (to some extent at least), and more sophisticated sensors that helped controllers adapt themselves more effectively to different circumstances.

Among the technologies that appeared in this period, we highlight the Player Project (2000, formerly the Player/Stage Project), the Gazebo simulator (2004) and the Robot Operating System (2007). Moreover, relevant hardware platforms appeared during these years. Single Board Computers (SBCs), like the Raspberry Pi, enabled millions of users all around the world to create robots easily.

The boost of bio-inspired artificial intelligence

The increasing popularity of artificial intelligence, and particularly neural networks, became relevant in this period as well. While a lot of the important work on neural networks happened in the 80’s and in the 90’s, computers did not have enough computational power at the time. Datasets weren’t big enough to be useful in practical applications. As a result, neural networks practically disappeared in the first decade of the 21st century. However, starting from 2009 (speech recognition), neural networks gained popularity and started delivering good results in fields such as computer vision (2012) or machine translation (2014). Over the last few years, we’ve seen how these techniques have been translated to robotics for tasks such as robotic grasping. In the coming years, we expect to see these AI techniques having more and more impact in robotics.

What happened to industrial robots?

Relevant key technologies have also emerged from the industrial robotics landscape (e.g.: EtherCAT). However, except for the appearance of the first so-called collaborative robots, the progress within the field of industrial robotics has significantly slowed down when compared to previous decades. Several groups have identified this fact and written about it with conflicting opinions. Below, we summarize some of the most relevant points encountered while reviewing previous work:

  • The industrial robot industry: is it only a supplier industry?

    For some, the industrial robot industry is a supplier industry. It supplies components and systems to larger industries, like manufacturing. These groups argue that the manufacturing industry is dominated by the PLC, motion control and communication suppliers which, together with the big customers, are setting the standards. Industrial robots therefore need to adapt and speak factory languages (PROFINET, EtherCAT, Modbus TCP, EtherNet/IP, CANopen, DeviceNet, etc.), which might be different for each factory.
  • Lack of collaboration and standardized interfaces in industry

    To date, each industrial robot manufacturer’s business model is somehow about locking you into their system and controllers. Typically, one will encounter the following facts when working with an industrial robot: a) each robot company has its own proprietary programming language, b) programs can’t be ported from one robot company to the next one, c) communication protocols are different, d) logical, mechanical and electrical interfaces are not standardized across the industry. As a result, most robotic peripheral makers suffer from having to support many different protocols, which requires a lot of development time that reduces the functionality of the product.
  • Competing by obscuring vs opening new markets?

    The closed attitude of most industrial robot companies is typically justified by the existing competition. Such an attitude leads to a lack of understanding between different manufacturers. An interesting approach would be to have manufacturers agree on a common infrastructure. Such an infrastructure could define a set of electrical and logical interfaces (leaving the mechanical ones aside due to the variability of robots in different industries) that would allow industrial robot companies to produce robots and components that could interoperate, be exchanged and eventually enter into new markets. This would also lead to a competitive environment where manufacturers would need to demonstrate features, rather than the typical obscured environment where only some are allowed to participate.
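The interoperability argument above can be made concrete with a sketch (entirely hypothetical names; no vendor actually exposes this interface today): if every vendor's component implemented one agreed logical interface, application code would not care which manufacturer built the part:

```python
from abc import ABC, abstractmethod

class RobotComponent(ABC):
    """Hypothetical common logical interface for interchangeable robot parts."""
    @abstractmethod
    def status(self) -> str: ...
    @abstractmethod
    def command(self, action: str) -> bool: ...

class VendorAGripper(RobotComponent):
    def status(self): return "idle"
    def command(self, action): return action in ("open", "close")

class VendorBGripper(RobotComponent):
    def status(self): return "ready"
    def command(self, action): return action in ("open", "close", "pinch")

def pick(part: RobotComponent) -> bool:
    # Application code is written against the interface, not the vendor,
    # so grippers from different manufacturers are interchangeable.
    return part.command("close")

print(pick(VendorAGripper()), pick(VendorBGripper()))  # True True
```

Today, by contrast, each proprietary robot programming language plays the role of a vendor-specific `command()` that nothing else can call.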
The Hardware Robot Operating System (H-ROS)

For robots to enter new and different fields, it seems reasonable that they need to adapt to the environment itself. This was previously highlighted for the industrial robotics case, where robots had to be fluent in factory languages. One could argue the same for service robots (e.g. household robots that will need to adapt to dishwashers, washing machines, media servers, etc.), medical robots and many other areas of robotics. Such reasoning led to the creation of the Hardware Robot Operating System (H-ROS), a vendor-agnostic hardware and software infrastructure for the creation of robot components that interoperate and can be exchanged between robots. H-ROS builds on top of ROS, which is used to define a set of standardized logical interfaces that each physical robot component must meet to be compliant with H-ROS.

H-ROS facilitates a fast way of building robots, choosing the best component for each use-case from a common robot marketplace. It complies with different environments (industrial, professional, medical, …) where variables such as time constraints are critical. Building or extending robots is simplified to the point of placing H-ROS compliant components together. The user simply needs to program the cognition part (i.e. brain) of the robot and develop their own use-cases, all without facing the complexity of integrating different technologies and hardware interfaces.

The future ahead

With the latest AI results being translated to robotics, and recent investments in the field, there is high anticipation for the near future of robotics.

As Melonee Wise nicely put it in a recent interview, there are still not that many things you can do with a $1,000-5,000 BOM robot (which is what most people would pay on an individual basis for a robot). Hardware is still a limiting factor, and our team strongly believes that a common infrastructure, such as H-ROS, will facilitate an environment where robot hardware and software can evolve.

The list presented below summarizes, according to our judgement, some of the most technically feasible future robotic technologies to appear.


This review was funded and supported by Acutronic Robotics, a firm focused on the development of next-generation robot solutions for a range of clients.

The authors would also like to thank the Erle Robotics and the Acutronic groups for their support and help.


  • [39] Campbell, M., Hoane, A. J., & Hsu, F. H. “Deep blue,” in Artificial intelligence, Vol. 134, 2002, pp.57–83. (link)
  • [40] Folkner, W. M., Yoder, C. F., Yuan, D. N., Standish, E. M., & Preston, R. A. “Interior structure and seasonal mass redistribution of Mars from radio tracking of Mars Pathfinder,” in Science 278(5344), 1997, pp.1749–1752. (link)
  • [41] Cliburn, D. C. “Experiences with the LEGO Mindstorms throughout the undergraduate computer science curriculum,”in Frontiers in Education Conference, 36th Annual,IEEE, October 2006, pp.1–6. (link)
  • [42] Rowe S., R Wagner C. “An introduction to the joint architecture for unmanned systems (JAUS),” in Ann Arbor 1001, 2008. (link)
  • [43] Fujita, M. “On activating human communications with pet-type robot AIBO,” in Proceedings of the IEEE, Vol. 92, 2004, pp.1804–1813.(link)
  • [44] Breazeal, C. L. “Sociable machines: Expressive social exchange between humans and robots,” in Doctoral dissertation, Massachusetts Institute of Technology, 2000. (link)
  • [45] Rafiei, M., Elmi, S. M., & Zare, A. “Wireless communication protocols for smart metering applications in power distribution networks,” in Electrical Power Distribution Networks (EPDC), 2012 Proceedings of 17th Conference on. IEEE, May 2012, pp. 1–5. (link)
  • [46] Brian Gerkey, Richard T Vaughan, and Andrew Howard. “The Player/Stage project: Tools for multi-robot and distributed sensor systems” in Proceedings of the 11th international conference on advanced robotics. Vol. 1. 2003, pp. 317–323. (link)
  • [47] Herman Bruyninckx. “Open robot control software: the OROCOS project” in Robotics and Automation, 2001. Proceedings 2001 icra. ieee International Conference on. Vol. 3. IEEE. 2001, pp. 2523–2528. (link)
  • [48] Hirose, M., & Ogawa, K. “Honda humanoid robots development,” in Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 365(1850), 2007, pp.11–19. (link)
  • [49] Mohr, F. W., Falk, V., Diegeler, A., Walther, T., Gummert, J. F., Bucerius, J., … & Autschbach, R. “Computer-enhanced “robotic” cardiac surgery: experience in 148 patients” in The Journal of thoracic and cardiovascular surgery, 121(5), 2001, pp.842–853. (link)
  • [50] Jones, J. L., Mack, N. E., Nugent, D. M., & Sandin, P. E. “U.S. Patent №6,883,201,” in Washington, DC: U.S. Patent and Trademark Office, 2005. (link)
  • [51] Jansen, D., & Buttner, H. “Real-time Ethernet: the EtherCAT solution,” in Computing and Control Engineering, 15(1), 2004, pp. 16–21. (link)
  • [52] Koenig, N., & Howard, A. “Design and use paradigms for gazebo, an open-source multi-robot simulator,” in Intelligent Robots and Systems, 2004.(IROS 2004). Proceedings. 2004 IEEE/RSJ International Conference on IEEE., Vol. 3, September 2004, pp. 2149–2154. (link)
  • [53] Cousins, S. “Willow garage retrospective [ros topics],” in IEEE Robotics & Automation Magazine, 21(1), 2014, pp.16–20. (link)
  • [54] Garage, W. Robot operating system. 2009. [Online]. (link)
  • [55] Fisher A. “Inside Google’s Quest To Popularize Self-Driving Cars,” in Popular Science, Bonnier Corporation, Retrieved 10 October 2013. (link)
  • [56] Cousins, S. “Ros on the pr2 [ros topics],” in IEEE Robotics & Automation Magazine, 17(3), 2010, pp.23–25. (link)
  • [57] PARROT, S. A. “Parrot AR. Drone,” 2010. (link)
  • [58] Honda Motor Co. ASIMO, 2011. [Online]. (link)
  • [59] Shen, F., Yu, H., Sakurai, K., & Hasegawa, O. “An incremental online semi-supervised active learning algorithm based on self-organizing incremental neural network,” in Neural Computing and Applications, 20(7), 2011, pp.1061–1074. (link)
  • [60] Industria 4.0 en la Feria de Hannover: La senda hacia la “fábrica inteligente” pasa por la Feria de Hannover, sitio digital ‘Deutschland’, 7 de abril de 2014 (link)
  • [61] Richardson, Matt, and Shawn Wallace “Getting started with raspberry PI,” in O’Reilly Media, Inc., 2012. (link)
  • [62] Edwards, S., & Lewis, C. “Ros-industrial: applying the robot operating system (ros) to industrial applications,” in IEEE Int. Conference on Robotics and Automation, ECHORD Workshop, May 2012. (link)
  • [63] Canis, B. Unmanned aircraft systems (UAS): Commercial outlook for a new industry. Congressional Research Service, Washington, 2015, p.8. (link)
  • [64] Trishan de Lanerolle, The Dronecode Foundation aims to keep UAVs open, Jul 2015. [Online] (link)
  • [65] Savioke, Your Robot Butler Has Arrived, August 2014. [Online]. (link)
  • [66] ABB, “ABB introduces Yumi, world´s first truly collaborative dual-arm robot”, 2015. Press release (link)
  • [67] LEE, Chang-Shing, et al. Human vs. Computer Go: Review and Prospect [Discussion Forum]. IEEE Computational Intelligence Magazine, 2016, vol. 11, no 3, pp. 67–72. (link)
  • [68] Bogue, R. (2015). Sensors for robotic perception. Part one: human interaction and intentions. Industrial Robot: An International Journal, 42(5), pp.386–391 (link)
  • [69] The Linux foundation, official wiki. 2009. [Online]. (link)
  • [70] The Tesla Team, “ All Tesla Cars Being Produced Now Have Full Self-Driving Hardware” Official web, 19 Oct. 2016. [Online]. (link)
  • [71] Brian Gerkey. Why ROS 2.0?, 2014. [Online]. (link)
  • [72] Acutronic Robotics, “H-ROS: Hardware Robot Operating System”, 2016. [Online]. (link)
  • [73] Judith Viladomat, TALOS:the next step in humanoid robots from PAL Robotics. 4 Oct. 2016 [Online]. (link)

Source: Robohub

To visit any links mentioned please view the original article, the link is at the top of this post.

Started March 27, 2017, 10:48:32 AM


Way before I attempt VR let me ask some questions. in General Project Discussion

Let's say that, with my soon-to-be new computer, I attempt to build my AI in a VR 3D world in a way that also keeps processing demands small.

Do I use Blender and Python to control the body? Or do I open a compiler, get my hands dirty, and prepare to build some matrix out of code? I'm guessing that matrix would actually turn out to be Blender, meaning Blender is already a start for me here, correct?

I know C++ and Blender, but what else will I need to know? Teach me the important things here. Do I use code to control the bones' movement, for example, and how? Where does my AI algorithm go? I'm so confused.

In fact tell me everything you would do, such as "then place cameras in the skull's eyesockets and code that in and..."
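For what it's worth, in Blender you would typically drive an armature's pose bones from a script through the bpy API (e.g. setting `obj.pose.bones["Bone"].rotation_euler`); underneath, bone movement is just forward kinematics. Here is a minimal plain-Python sketch of that idea, independent of Blender — the bone names and lengths are made up for illustration:

```python
import math

def forward_kinematics(bone_lengths, joint_angles):
    """2D forward kinematics for a chain of bones.

    Each angle is relative to the parent bone, the way Blender
    pose-bone rotations are relative to the rest pose.
    Returns the (x, y) position of each joint, root first.
    """
    x = y = theta = 0.0
    joints = [(x, y)]
    for length, angle in zip(bone_lengths, joint_angles):
        theta += angle                      # accumulate parent rotations
        x += length * math.cos(theta)
        y += length * math.sin(theta)
        joints.append((x, y))
    return joints

# Two unit-length bones; the second is bent 90 degrees at the "elbow"
joints = forward_kinematics([1.0, 1.0], [0.0, math.pi / 2])
print(joints[-1])  # hand ends up near (1.0, 1.0)
```

In Blender itself, the equivalent move is setting a pose bone's `rotation_euler` from a script or a per-frame handler; your AI algorithm then becomes whatever code decides which angles to feed in each frame.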

48 Comments | Started October 25, 2016, 10:44:43 PM


Choreographing automated cars could save time, money and lives in Robotics News

Choreographing automated cars could save time, money and lives
15 March 2017, 2:00 pm

If you take humans out of the driving seat, could traffic jams, accidents and high fuel bills become a thing of the past? As cars become more automated and connected, attention is turning to how to best choreograph the interaction between the tens or hundreds of automated vehicles that will one day share the same segment of Europe’s road network.

It is one of the most keenly studied fields in transport – how to make sure that automated cars get to their destinations safely and efficiently. But the prospect of having a multitude of vehicles taking decisions while interacting on Europe’s roads is leading researchers to design new traffic management systems suitable for an era of connected transport.

The idea is to ensure that traffic flows as smoothly and efficiently as possible, potentially avoiding the jams and delays caused by human behaviour.

‘Travelling distances and time gaps between vehicles are crucial,’ said Professor Markos Papageorgiou, head of the Dynamic Systems & Simulation Laboratory at the Technical University of Crete, Greece. ‘It is also important to consider things such as how vehicles decide which lane to drive in.’

Prof. Papageorgiou’s TRAMAN21 project, funded by the EU’s European Research Council, is studying ways to manage the behaviour of individual vehicles, as well as highway control systems.

For example, the researchers have been looking at how adaptive cruise control (ACC) could improve traffic flows. ACC is a ‘smart’ system that speeds up and slows down a car as necessary to keep up with the one in front. Highway control systems using ACC to adjust time gaps between cars could help to reduce congestion.

‘It may be possible to have a traffic control system that looks at the traffic situation and recommends or even orders ACC cars to adopt a shorter time gap from the car in front,’ Prof. Papageorgiou said.

‘So during a peak period, or if you are near a bottleneck, the system could work out a gap that helps you avoid the congestion and gives higher flow and higher capacity at the time and place where this is needed.’
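A constant-time-gap ACC policy of the kind described here can be sketched in a few lines. The gains and the 1.5 s default gap below are illustrative assumptions of mine, not values from the TRAMAN21 project:

```python
def acc_acceleration(gap, ego_speed, lead_speed,
                     time_gap=1.5, k_gap=0.2, k_speed=0.4):
    """Constant-time-gap adaptive cruise control.

    gap        -- current distance to the car in front (m)
    ego_speed  -- own speed (m/s)
    lead_speed -- speed of the car in front (m/s)
    time_gap   -- desired following gap in seconds
    Returns a commanded acceleration (m/s^2).
    """
    desired_gap = time_gap * ego_speed
    gap_error = gap - desired_gap
    speed_error = lead_speed - ego_speed
    return k_gap * gap_error + k_speed * speed_error

# At the desired gap with matched speeds, no correction is needed
a = acc_acceleration(gap=30.0, ego_speed=20.0, lead_speed=20.0)
print(a)  # 0.0
```

A traffic control system could then shorten `time_gap` near a bottleneck, which is exactly the kind of intervention Prof. Papageorgiou describes: a smaller gap means more vehicles per kilometre of lane, i.e. higher capacity.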

Variable speed limits

TRAMAN21, which runs to 2018, has been running tests on a highway near Melbourne, Australia, and is currently using variable speed limits to actively intervene in traffic to improve flows.

An active traffic management system of this kind could even help when only relatively few vehicles on the highway have sophisticated automation. But Prof. Papageorgiou believes that self-driving vehicle systems must be robust enough to be able to communicate with each other even when there are no overall traffic control systems.

‘Schools of fish and flocks of birds do not have central controls, and the individuals base their movement on the information from their own senses and the behaviour of their neighbours,’ Prof. Papageorgiou said.

‘In theory this could also work in traffic flow, but there is a lot of work to be done if this is to be perfected. Nature has had a long head-start.’

One way of managing traffic flow is platooning – a way to schedule trucks to meet up and drive in convoy on the highway. Magnus Adolfson from Swedish truckmaker Scania AB, who coordinated the EU-funded COMPANION project, says that platooning – which has already been demonstrated on Europe’s roads – can also reduce fuel costs and accidents.

The three-year project tested different combinations of distances between trucks, speeds and unexpected disruptions or stoppages.

Fuel savings

In tests with three-vehicle platoons, researchers achieved fuel savings of 5 %. And by keeping radio contact with each other, the trucks can also reduce the risk of accidents.

‘About 90 percent of road accidents are caused by driver error, and this system, particularly by taking speed out of the driver’s control, can make it safer than driving with an actual driver,’ Adolfson said.

The COMPANION project also showed the benefits of close communication between vehicles to reduce the likelihood of braking too hard and causing traffic jams further back.
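As a toy illustration of why that matters, here is a sketch of followers smoothly tracking the lead truck's speed over a radio link instead of reacting late and braking hard. The gains, time step and speeds are invented for the example; this is not COMPANION's actual controller:

```python
def simulate_platoon(lead_speeds, n_followers, k=0.5, dt=0.1, v0=25.0):
    """Each follower steers its speed toward its predecessor's
    with a first-order lag, standing in for V2V speed coordination.
    Returns the follower speeds at every time step."""
    speeds = [v0] * n_followers
    history = []
    for v_lead in lead_speeds:
        predecessor = v_lead
        for i in range(n_followers):
            speeds[i] += k * (predecessor - speeds[i]) * dt
            predecessor = speeds[i]
        history.append(list(speeds))
    return history

# Lead truck eases from 25 to 20 m/s; three followers converge
# without the slowdown amplifying down the platoon.
lead = [25.0] * 10 + [20.0] * 200
history = simulate_platoon(lead, n_followers=3)
```

Because each truck reacts to its predecessor's broadcast speed rather than to visible brake lights, the slowdown propagates gently instead of growing into the stop-and-go wave that causes jams further back.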

‘There is enough evidence to show that using such a system can have a noticeable impact, so it would be good to get it into production as soon as possible,’ Adolfson said. The researchers have extended their collaboration to working with the Swedish authorities on possible implementation.

Rutger Beekelaar, a project manager at Dutch-based research organisation TNO, says that researchers need to demonstrate how automated cars can work safely together in order to increase their popularity.

‘Collaboration is essential to ensure vehicles can work together,’ he said. ‘We believe that in the near future, there will be more and more automation in traffic, in cars and trucks. But automated driving is not widely accepted yet.’

To tackle this, Beekelaar led a group of researchers in the EU-funded i-GAME project, which developed technology that uses wireless communication that contributes to managing and controlling automated vehicles.

They demonstrated these systems in highway conditions at the 2016 Grand Cooperative Driving Challenge in Helmond, in the Netherlands, which put groups of real vehicles through their paces to demonstrate cooperation: safely negotiating an intersection crossing and merging with another column of traffic.

Beekelaar says that their technology is now being used in other European research projects, but that researchers, auto manufacturers, policymakers, and road authorities still need to work together to develop protocols, systems and standardisation, along with extra efforts to address cyber security, ethics and particularly the issue of public acceptance.

Source: Robohub

To visit any links mentioned please view the original article, the link is at the top of this post.

Started March 27, 2017, 04:49:29 AM


Three years on: An update from Leka, Robot Launch winner in Robotics News

Three years on: An update from Leka, Robot Launch winner
15 March 2017, 10:00 am

Nearly three years ago, Leka won the Grand Prize at the 2014 Robot Launch competition for their robotic toy set on changing the way children with developmental disorders learn, play and progress. Leka will be the first interactive tool for children with developmental disorders that is available for direct purchase by the public. Designed for use in the home and not limited to a therapist's office, Leka makes communication between therapists, parents and children easier, more efficient and more accessible through its monitoring platform. Leka's co-founder and CEO, Ladislas de Toldi, writes about Leka's progress since the Robot Launch competition and where the company is headed in the next year.

Since winning the Robot Launch competition in 2014, Leka has made immense progress and is well on its way to getting into the hands of exceptional children around the globe.

2016 was a big year for us; Leka was accepted into the 2016 class of the Sprint Accelerator Program, powered by Techstars, in Kansas City, MO. The whole team picked up and moved from Paris, France to the United States for a couple of months to work together as a team and create the best version of Leka possible.

Techstars was for us the opportunity to really test the US Special Education Market. We came to the program with two goals in mind: to build a strong community around our project in Kansas City and the area, and to launch our crowdfunding campaign on Indiegogo.

The program gave us an amazing support system to connect with people in the Autism community in the area and to push ourselves to build the best crowdfunding campaign targeting special education.

We’re incredibly humbled to say we succeeded in both: Kansas City is going to be our home base in the US, thanks to all the partnerships we now have with public schools and organizations.

Near the end of our accelerator program in May 2016, we launched our Indiegogo campaign to raise funds for Leka’s development and manufacturing—and ended up raising more than 150 percent of our total fundraising goal. We had buyers from all over the world, including the United States, France, Israel, Australia and Uganda! As of today, we have raised more than $145k on Indiegogo, with more than 300 units preordered.

In July, the entire Leka team moved back to Paris to finalize the hardware development of Leka and kick off the manufacturing process. Although the journey has been full of challenges, we are thrilled with the progress we have made on Leka and the impact it can make on the lives of children.

This past fall, we partnered with Bourgogne Service Electronics (BSE) for manufacturing. BSE is a French company and we’re working extremely closely with them on Leka’s design. Two of our team members, Alex and Gareth, recently worked with BSE to finalize the design and create what we consider to be Leka’s heart—an electronic card. The card allows Leka’s lights, movements and LCD screen to come to life.

We were also able to integrate proximity sensors into Leka, so that it can tell where children are touching it, leading to better analytics and progress monitoring in the future.

We have had quite a few exciting opportunities in the past year at industry events as well! We attended the Techstars alumni conference FounderCon, in Cincinnati, OH, and CES Unveiled in Paris in the Fall. We then had the opportunity to present Leka in front of some amazing industry professionals at the Wall Street Journal’s WSJ.D Live in Laguna Beach, CA. But most exciting was CES in Las Vegas this past January, and the announcements we made at the show.

At CES, we were finally able to unveil our newest industrial-grade prototypes with the autonomous features we’ve been working toward for the past three years. With Leka’s new fully integrated sensors, children can play with the robotic toy on their own, making it much more humanlike and interactive. These new features allow Leka to better help children understand social cues and improve their interpersonal skills.

At CES we also introduced Leka’s full motor integration, vibration and color capabilities, and the digital screen. Leka’s true emotions can finally show!

In the six months between our Indiegogo campaign and CES Las Vegas, we made immense improvements to Leka and poured our hearts into the product we believe will change lives for exceptional children and their families. We’re currently developing our next industrial prototype so we can make Leka even better, and we’re aiming to begin shipping in Fall 2017. We can’t wait to show you all the final product!

*All photos credit: Leka

About Leka

Leka is a robotic smart toy set on changing the way children with developmental disorders learn, play and progress. Available for direct purchase online through InDemand, Leka is an interactive tool designed to make communication between therapists, parents and children easier, more efficient and more accessible. Working with and adapting to each child’s own needs and abilities, Leka is able to provide vital feedback to parents and therapists on a child’s progress and growth.

Founded in France with more than two years in R&D, the company recently completed its tenure at the 2016 Sprint Accelerators Techstars program and is rapidly growing. Leka expects to begin shipping out units to Indiegogo backers in Fall 2017.

For more information, please visit http://leka.io.

If you liked this article, read more about Leka on Robohub here:

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

Source: Robohub

To visit any links mentioned please view the original article, the link is at the top of this post.

Started March 26, 2017, 10:49:19 PM

Hyper chatbot in Assistants

[Messenger] We have developed a chatbot that estimates the cost of app development after it is provided some details about the project.

Mar 07, 2017, 13:45:36 pm

WBot in Assistants

[Messenger] Utilizing an API from the Singapore National Environment Agency, this chatbot is able to provide

(1) Current weather information in different locations in Singapore

(2) 24-hour regional weather forecasts in Singapore

(3) Current regional Pollutant Standards Index information in Singapore

We are constantly improving the bot, and hope to provide weather information in all cities in the world soon.

Creator URL : http://singaporechatbots.sg

Mar 06, 2017, 13:52:19 pm
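A bot like WBot mostly reduces to routing the user's message to one of its three data lookups. Here is a hypothetical sketch of that routing — the intent names and keywords are mine for illustration, not WBot's actual logic:

```python
def route_weather_query(message):
    """Map a user message to one of three weather-data intents."""
    text = message.lower()
    if "forecast" in text:
        return "24h_regional_forecast"
    if "psi" in text or "pollutant" in text or "haze" in text:
        return "pollutant_standards_index"
    return "current_weather"

print(route_weather_query("What's the PSI in the east?"))
# pollutant_standards_index
```

Each intent would then call the corresponding National Environment Agency endpoint and format the response for Messenger.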

myBot in Tools

myBot is a free platform to create and share chatbots with artificial intelligence. It doesn't require programming skills. The platform does the AI magic for you just from your example questions and answers.

Mar 06, 2017, 13:44:37 pm

Kurna the Klingon in Chatbots - English

The first online chatbot in the Klingon language. Note that the chatbot only speaks Klingon!

Mar 06, 2017, 13:42:10 pm

PsychicMatic in Chatbots - English

PsychicMatic is a fortune teller chatbot pretending to be a psychic capable of predicting your future.

Mar 06, 2017, 13:40:43 pm

ChatBot App Development Explained in Articles

Where earlier chatbots were small entertainment features for people interested in new technologies, today they are among the most important functions of a modern messenger. With the help of chatbot technology, a company can improve its business.

Chatbots can handle tasks such as data search, acting as an electronic assistant, and building a company's image.

As for integration, chatbots can be built into most of today's messengers: Telegram, Facebook Messenger, Viber and others. This lets users manage their free time and their devices' resources more efficiently.

Chatbot development is at the peak of its popularity, and not for nothing. If it's high time to take on new challenges, the chatbot is one of them.

Feb 17, 2017, 18:03:08 pm

Erwin in Chatbots - English

May I introduce myself: Erwin is the name. I can send knotty riddles and clues to solve them. 👉 Type 'riddle' to get started!

FB Messenger - m.me/erwin.chat
Telegram - telegram.me/ErwinChatBot

Feb 17, 2017, 17:46:30 pm

MOTI AI in Chatbots - English

MOTI AI is a chatbot that uses conversation with users to determine a comprehensive motivational profile that helps them build better habits. It uses a background in behavioral science to determine how to best set up its users on the road to successfully starting and sticking to a new habit.

Feb 17, 2017, 17:37:57 pm

Star Wars: The Force Awakens in Robots in Movies

Approximately 30 years after the destruction of the second Death Star, the last remaining Jedi, Luke Skywalker, has disappeared. The First Order has risen from the fallen Galactic Empire and seeks to eliminate the New Republic. The Resistance, backed by the Republic and led by Luke's twin sister, General Leia Organa, opposes them while searching for Luke to enlist his aid.

Feb 03, 2017, 17:31:49 pm

A Personal Finance in Articles

One of the critical requirements for the adoption of AIML technology as a standardized human-computer interface is the creation of a core set of application programming interfaces. For ALICE to become the primary mode of user interaction, she must have interfaces to a collection of common applications such as email, database, address book, calculator, and spreadsheet programs. This report describes a first pass at the "spreadsheet" application for managing personal finances.
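As a rough illustration of what such a "spreadsheet" interface means in practice, here is a hypothetical sketch (not the report's actual AIML) of pattern-matched finance commands updating a ledger — the kind of category-to-action mapping an AIML wildcard pattern like `ADD EXPENSE *` would trigger:

```python
import re

LEDGER = {"balance": 0.0}

# AIML-style wildcard patterns mapped to a sign for the amount
PATTERNS = [
    (re.compile(r"ADD INCOME (\d+(?:\.\d+)?)", re.I), 1.0),
    (re.compile(r"ADD EXPENSE (\d+(?:\.\d+)?)", re.I), -1.0),
]

def respond(message):
    """Match a finance command and answer like a chatbot template would."""
    for pattern, sign in PATTERNS:
        m = pattern.fullmatch(message.strip())
        if m:
            LEDGER["balance"] += sign * float(m.group(1))
            return f"Noted. Your balance is now {LEDGER['balance']:.2f}."
    return "Sorry, I only understand ADD INCOME <n> and ADD EXPENSE <n>."

print(respond("ADD INCOME 100"))     # Noted. Your balance is now 100.00.
print(respond("add expense 40.50"))  # Noted. Your balance is now 59.50.
```

In real AIML, the pattern side would live in `<pattern>` elements and the handler in `<template>` elements calling out to the application; the matching idea is the same.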

Aug 06, 2008, 18:00:25 pm