Version 5 (modified by tim, 6 years ago)

Clarified Next Milestones

Lab Course Robot Communication and Coordination - Winter term 2012/2013

The goal of this lab course is to design, develop, and test methods to implement communication facilities for intelligent robot agents. The scenario is the RoboCup Logistics League Sponsored by Festo with a group of up to three cooperating robots. By the end of the semester, a demonstration should show how the communication facilities can be used to coordinate the robots to cooperatively solve a specific task.



Students:

  • Anne Kathrein
  • Leonardo Antunes
  • Hendrik Pesch


Advisors:

  • Tim Niemueller
  • Stefan Schiffer

Protocol Design

Protocol Design - Phase 1: Scouting

In the first phase, all Robotinos scout the machines until all of M1 through M13 have been determined.

  1. Elect a master
    • masterElectionMessage: ID of the master
       * Every robot generates a random number
       * The largest random number determines the master (if the same number is rolled, repeat)
  2. The master defines the procedure for distributing the S0s to the machines
    • Message 1.1 assign task: master -> all (e.g. an S0 has to go to M5)
    • Task acceptance: R_i -> all
    • Task confirmation: master -> all (if a task was rejected, R_i can apply for a new task)
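The master election could be sketched as follows. This is only an illustration of the rule "largest random number wins, re-roll on a tie"; the type and function names are assumptions, not the course's actual message format.

```cpp
#include <cstdint>
#include <map>
#include <string>

// Each robot broadcasts a random number in a masterElectionMessage;
// an ElectionRound collects the received rolls.
struct ElectionRound {
  std::map<std::string, uint32_t> rolls;  // robot ID -> rolled random number
};

// Returns the winner's ID, or "" if the top roll is shared and the
// round has to be repeated with fresh random numbers.
std::string elect_master(const ElectionRound &round) {
  std::string winner;
  uint32_t best = 0;
  bool have_best = false, tie = false;
  for (const auto &entry : round.rolls) {
    if (!have_best || entry.second > best) {
      best = entry.second;
      winner = entry.first;
      have_best = true;
      tie = false;
    } else if (entry.second == best) {
      tie = true;  // same maximum rolled twice -> re-roll needed
    }
  }
  return tie ? "" : winner;
}
```

Every robot can evaluate this function on the same set of received rolls, so all robots agree on the master without a further confirmation round.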

  3. Tasks are stored in the world state
    • States: new, in progress by R_i, done
    • Message: world state update
       * Machine type changed
       * Machine state changed
  4. Effort approximation:
    • On accepting a task, Robotino i computes a normalized comparison value (e.g. distance, travel path) and sends this information to the master
    • This value should also be used to select which task to confirm
    • Based on this score, the master decides which Robotino receives the task
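The score-based decision in step 4 amounts to picking the lowest reported effort value. A minimal sketch, assuming normalized scores where smaller means cheaper (names are illustrative):

```cpp
#include <limits>
#include <map>
#include <string>

// The master collects one normalized effort score per bidding Robotino
// (e.g. estimated travel distance) and grants the task to the robot
// with the smallest score.  Returns "" if no robot applied.
std::string assign_task(const std::map<std::string, double> &scores) {
  std::string best_robot;
  double best = std::numeric_limits<double>::infinity();
  for (const auto &bid : scores) {
    if (bid.second < best) {
      best = bid.second;
      best_robot = bid.first;
    }
  }
  return best_robot;
}
```

Normalizing the scores on the robot side keeps the master's decision independent of how each Robotino estimates its effort.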

A Particular Scenario involving two Robotinos

Situation: two Robotinos are on the field; known to the robots are two machines of types T1 and T2, both currently empty.

Task: One Robotino is the primary Robotino that guides the task. Based on the decision to produce an S2 at the T2 machine, the sub-task of delivering an S0 to the T2 is assigned to the other, secondary, Robotino. In the meantime, the primary Robotino produces the S1 at the T1 machine.

Pitfalls: Resource locking will be required for the input storage and the machines, in particular T2; otherwise the robots could disturb each other.
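The subtask handoff in this scenario could be carried over a simple text message such as "bring-s0-to M1". This is only a hypothetical wire format for illustration; the actual agents communicate via the Fawkes/CLIPS infrastructure.

```cpp
#include <sstream>
#include <string>

// Parse a hypothetical handoff message of the form "bring-s0-to <machine>"
// into the target machine name.  Returns "" if the message is not a
// handoff command or names no machine.
std::string parse_bring_s0_target(const std::string &msg) {
  std::istringstream in(msg);
  std::string cmd, machine;
  in >> cmd >> machine;
  if (cmd != "bring-s0-to") return "";
  return machine;
}
```

On the secondary's side, a non-empty result would then be asserted as a fact for the CLIPS agent to act on.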

  1. Slave Agent
    1. Write a simple agent which gets an S0 and takes it to the only known T2 on assertion of (start). Modify the initial situation (facts.clp) so it knows a T2. This involves executing get_s0 and take_puck_to (like in the existing agent, but with a fixed target).
    2. Command line tool to send a simple message, e.g. "get S0 and take to machine X".
    3. Add support for receiving such a message to the agent (C++) and assert a fact like (bring-s0-to "M1") (to name a specific machine, make sure your initial situation has this machine as T2).
    4. Add another skill call to move away from the T2 machine after delivery (this requires additions from the clips-agent plugin in the base repository, which will be merged soonish)
  2. Master Agent
    • Message reception has already been implemented in part 1.
    1. Add rule(s) to the existing LLSF agent to specifically handle the described situation.
    2. Add support for message sending to the agent
    3. In the specific situation, order the secondary Robotino to bring an S0 to the T2 machine
    4. Make sure the primary will go on producing the S1 at the T1 machine and does not bring an S0 to the T2 itself (this should already work; enable CLIPS debugging and check the rule activation sequence).
  3. Locking
    • Consider Input Storage and machines to be shared resources
    • These must be reserved for a particular robot at a time
    • Extend to paths?
    • Add waiting queues to allow even more robots, or to already send a robot to a position closer to the target even though it cannot yet enter the critical region?
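The locking scheme above, including the proposed waiting queues, could be sketched as a per-resource queue whose front entry is the current holder. This is a sketch under the stated assumptions (one holder per resource, FIFO hand-over), not the course's actual implementation.

```cpp
#include <deque>
#include <map>
#include <string>

// Machines and the input storage are shared resources; only one robot
// may hold each at a time.  A waiting queue per resource lets further
// robots line up (and e.g. already drive closer) until it is released.
class LockTable {
public:
  // Request the resource.  Returns true if the robot now holds it,
  // false if it was queued behind the current holder.
  bool acquire(const std::string &resource, const std::string &robot) {
    auto &q = queues_[resource];
    for (const auto &r : q)
      if (r == robot) return q.front() == robot;  // already requested
    q.push_back(robot);
    return q.front() == robot;
  }

  // Release the resource; the next queued robot (if any) becomes holder.
  void release(const std::string &resource, const std::string &robot) {
    auto &q = queues_[resource];
    if (!q.empty() && q.front() == robot) q.pop_front();
  }

  // Current holder of the resource, or "" if it is free.
  std::string holder(const std::string &resource) {
    auto &q = queues_[resource];
    return q.empty() ? "" : q.front();
  }

private:
  std::map<std::string, std::deque<std::string>> queues_;
};
```

Extending this to paths would mean treating path segments as further resource names in the same table.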

Testing should happen after each stage, i.e. after each sub-part has been developed; the way the tasks are defined should make this possible. Use the simulation mode of the agent. Real-world experiments will be done in February.