Notes on Using the Vector AI Robot SDK

Cozmo launched in 2016; Vector followed two years later in 2018, adding a voice assistant and more capabilities, with the part count rising from 300+ to 700+.

The Vector SDK documentation is at developer.anki.com/vector/docs/. It is currently a test release.

First, download the Vector SDK (from GitHub):

docs contains the documentation and examples the sample programs, plus assorted README files and install scripts.

The SDK supports Windows, Linux, and macOS; see the official site for the full installation and setup procedure.
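
After installation, authenticate the robot (the SDK provides python3 -m anki_vector.configure for this) and verify the connection. A minimal sketch, assuming get_battery_state() is available as in the SDK docs:

import anki_vector


def main():
    args = anki_vector.util.parse_command_args()
    with anki_vector.Robot(args.serial) as robot:
        # get_battery_state() may return None if the request fails
        battery_state = robot.get_battery_state()
        if battery_state:
            print("Battery voltage: {:.2f} V".format(battery_state.battery_volts))


if __name__ == "__main__":
    main()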

Because it is still a test release, the examples are fewer than Cozmo's:

face_images holds jpg/png images that can be displayed on Vector's face; you can add your own.

apps contains more complete applications; tutorials contains the tutorials.

1. tutorials

The tutorials are not grouped into chapters; there are 13 in total:

1.1 hello world

"""Hello World

Make Vector say 'Hello World' in this simple Vector SDK example program.
"""

import anki_vector


def main():
    args = anki_vector.util.parse_command_args()
    with anki_vector.Robot(args.serial) as robot:
        print("Say 'Hello World'...")
        robot.say_text("Hello World")


if __name__ == "__main__":
    main()

For comparison, Cozmo's hello world:

'''Hello World

Make Cozmo say 'Hello World' in this simple Cozmo SDK example program.
'''

import cozmo


def cozmo_program(robot: cozmo.robot.Robot):
    robot.say_text("Hello World").wait_for_completed()


cozmo.run_program(cozmo_program)

There are a few differences worth noting: Cozmo programs are launched through cozmo.run_program and actions must be awaited with wait_for_completed(), while Vector uses a Robot context manager whose say_text call blocks until the speech finishes.
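
If you prefer Cozmo's explicit-wait style, the SDK also provides anki_vector.AsyncRobot, whose calls return concurrent.futures.Future objects. A minimal sketch:

import anki_vector


def main():
    args = anki_vector.util.parse_command_args()
    with anki_vector.AsyncRobot(args.serial) as robot:
        # With AsyncRobot, say_text returns a Future; block until speech ends
        robot.say_text("Hello World").result()


if __name__ == "__main__":
    main()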

1.2 drive square

"""Make Vector drive in a square.

Make Vector drive in a square by going forward and turning left 4 times in a row.
"""

import anki_vector
from anki_vector.util import degrees, distance_mm, speed_mmps


def main():
    args = anki_vector.util.parse_command_args()

    # The robot drives straight, stops and then turns around
    with anki_vector.Robot(args.serial) as robot:
        robot.behavior.drive_off_charger()

        # Use a "for loop" to repeat the indented code 4 times
        # Note: the _ variable name can be used when you don't need the value
        for _ in range(4):
            print("Drive Vector straight...")
            robot.behavior.drive_straight(distance_mm(200), speed_mmps(50))

            print("Turn Vector in place...")
            robot.behavior.turn_in_place(degrees(90))


if __name__ == "__main__":
    main()
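
The same two calls generalize from a square to any regular polygon; the only change is turning the exterior angle 360/n. A sketch (drive_polygon is a hypothetical helper, not part of the SDK):

import anki_vector
from anki_vector.util import degrees, distance_mm, speed_mmps


def drive_polygon(robot, sides, side_mm=150):
    # Drive one side, then turn by the exterior angle, n times
    for _ in range(sides):
        robot.behavior.drive_straight(distance_mm(side_mm), speed_mmps(50))
        robot.behavior.turn_in_place(degrees(360 / sides))


def main():
    args = anki_vector.util.parse_command_args()
    with anki_vector.Robot(args.serial) as robot:
        robot.behavior.drive_off_charger()
        drive_polygon(robot, sides=3)


if __name__ == "__main__":
    main()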

1.3 motors

"""Drive Vector's wheels, lift and head motors directly

This is an example of how you can also have low-level control of Vector's motors
(wheels, lift and head) for fine-grained control and ease of controlling
multiple things at once.
"""

import time
import anki_vector


def main():
    args = anki_vector.util.parse_command_args()
    with anki_vector.Robot(args.serial) as robot:
        robot.behavior.drive_off_charger()

        # Tell the head motor to start lowering the head (at 5 radians per second)
        print("Lower Vector's head...")
        robot.motors.set_head_motor(-5.0)

        # Tell the lift motor to start lowering the lift (at 5 radians per second)
        print("Lower Vector's lift...")
        robot.motors.set_lift_motor(-5.0)

        # Tell Vector to drive the left wheel at 25 mmps (millimeters per second),
        # and the right wheel at 50 mmps (so Vector will drive forwards while also
        # turning to the left)
        print("Set Vector's wheel motors...")
        robot.motors.set_wheel_motors(25, 50)

        # wait for 3 seconds (the head, lift and wheels will move while we wait)
        time.sleep(3)

        # Tell the head motor to start raising the head (at 5 radians per second)
        print("Raise Vector's head...")
        robot.motors.set_head_motor(5)

        # Tell the lift motor to start raising the lift (at 5 radians per second)
        print("Raise Vector's lift...")
        robot.motors.set_lift_motor(5)

        # Tell Vector to drive the left wheel at 50 mmps (millimeters per second),
        # and the right wheel at -50 mmps (so Vector will turn in-place to the right)
        print("Set Vector's wheel motors...")
        robot.motors.set_wheel_motors(50, -50)

        # Wait for 3 seconds (the head, lift and wheels will move while we wait)
        time.sleep(3)

        # Stop the motors, which unlocks the tracks
        robot.motors.set_wheel_motors(0, 0)
        robot.motors.set_lift_motor(0)
        robot.motors.set_head_motor(0)


if __name__ == "__main__":
    main()
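
Since set_wheel_motors takes raw speeds, simple speed profiles can be layered on top of it. A sketch that ramps up gradually instead of starting at full speed (ramp_forward is a hypothetical helper, not part of the SDK):

import time
import anki_vector


def ramp_forward(robot, top_speed_mmps=100, steps=10, step_time=0.2):
    # Increase both wheel speeds linearly, then stop
    for i in range(1, steps + 1):
        speed = top_speed_mmps * i / steps
        robot.motors.set_wheel_motors(speed, speed)
        time.sleep(step_time)
    robot.motors.set_wheel_motors(0, 0)


def main():
    args = anki_vector.util.parse_command_args()
    with anki_vector.Robot(args.serial) as robot:
        robot.behavior.drive_off_charger()
        ramp_forward(robot)


if __name__ == "__main__":
    main()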

1.4 animation

"""Play an animation on Vector
"""

import anki_vector


def main():
    args = anki_vector.util.parse_command_args()
    with anki_vector.Robot(args.serial) as robot:
        robot.behavior.drive_off_charger()

        # Play an animation via its name.
        #
        # Warning: Future versions of the app might change these, so for future-proofing
        # we recommend using play_animation_trigger when it becomes available.
        #
        # See the remote_control.py example in apps for an easy way to see
        # the available animations.
        animation = 'anim_pounce_success_02'
        print("Playing animation by name: " + animation)
        robot.anim.play_animation(animation)


if __name__ == "__main__":
    main()
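
The comment points to remote_control.py for discovering animation names; you can also print them directly from robot.anim.anim_list, the same property that app reads. A sketch:

import anki_vector


def main():
    args = anki_vector.util.parse_command_args()
    with anki_vector.Robot(args.serial) as robot:
        # anim_list holds every animation name known to the robot
        for name in sorted(robot.anim.anim_list):
            if "pounce" in name:
                print(name)


if __name__ == "__main__":
    main()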

1.5 play behaviors

"""Tell Vector to drive on and off the charger.
"""

import anki_vector


def main():
    args = anki_vector.util.parse_command_args()

    with anki_vector.Robot(args.serial) as robot:
        print("Drive Vector onto charger...")
        robot.behavior.drive_on_charger()

        print("Drive Vector off of charger...")
        robot.behavior.drive_off_charger()


if __name__ == '__main__':
    main()

1.6 face image

"""Display an image (jpg or png) on Vector's face.
"""

import os
import sys
import time

try:
    from PIL import Image
except ImportError:
    sys.exit("Cannot import from PIL: Do `pip3 install --user Pillow` to install")

import anki_vector
from anki_vector.util import degrees


def main():
    args = anki_vector.util.parse_command_args()

    with anki_vector.Robot(args.serial) as robot:
        # If necessary, move Vector's Head and Lift to make it easy to see his face
        robot.behavior.set_head_angle(degrees(45.0))
        robot.behavior.set_lift_height(0.0)

        current_directory = os.path.dirname(os.path.realpath(__file__))
        image_path = os.path.join(current_directory, "..", "face_images", "cozmo_image.jpg")

        # Load an image
        image_file = Image.open(image_path)

        # Convert the image to the format used by the Screen
        print("Display image on Vector's face...")
        screen_data = anki_vector.screen.convert_image_to_screen_data(image_file)
        robot.screen.set_screen_with_image_data(screen_data, 4.0)
        time.sleep(5)


if __name__ == "__main__":
    main()
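
Any PIL image works once converted; Vector's screen is 184x96 pixels. A sketch that builds an image in memory instead of loading a file:

import time

from PIL import Image

import anki_vector


def main():
    args = anki_vector.util.parse_command_args()
    with anki_vector.Robot(args.serial) as robot:
        # Build a solid-color 184x96 image to fill the screen
        image = Image.new("RGB", (184, 96), (0, 128, 255))
        screen_data = anki_vector.screen.convert_image_to_screen_data(image)
        robot.screen.set_screen_with_image_data(screen_data, 4.0)
        time.sleep(5)


if __name__ == "__main__":
    main()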

1.7 dock with cube

"""Tell Vector to drive up to a seen cube.

This example demonstrates Vector driving to and docking with a cube, without
picking it up.  Vector will line his arm hooks up with the cube so that they are
inserted into the cube's corners.

You must place a cube in front of Vector so that he can see it.
"""

import anki_vector
from anki_vector.util import degrees


def main():
    args = anki_vector.util.parse_command_args()

    docking_result = None
    with anki_vector.Robot(args.serial) as robot:
        robot.behavior.drive_off_charger()

        # If necessary, move Vector's Head and Lift down
        robot.behavior.set_head_angle(degrees(-5.0))
        robot.behavior.set_lift_height(0.0)

        print("Connecting to a cube...")
        robot.world.connect_cube()

        if robot.world.connected_light_cube:
            print("Begin cube docking...")
            dock_response = robot.behavior.dock_with_cube(
                robot.world.connected_light_cube,
                num_retries=3)
            if dock_response:
                docking_result = dock_response.result

            robot.world.disconnect_cube()

    if docking_result:
        if docking_result.code != anki_vector.messaging.protocol.ActionResult.ACTION_RESULT_SUCCESS:
            print("Cube docking failed with code {0} ({1})".format(str(docking_result).rstrip('\n\r'), docking_result.code))
    else:
        print("Cube docking failed.")


if __name__ == "__main__":
    main()
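
Before docking, it can help to confirm the cube connection visually. A sketch that flashes the cube's lights, assuming the anki_vector.lights helpers and LightCube.set_lights (check your SDK version for these names):

import time

import anki_vector


def main():
    args = anki_vector.util.parse_command_args()
    with anki_vector.Robot(args.serial) as robot:
        robot.world.connect_cube()
        cube = robot.world.connected_light_cube
        if cube:
            cube.set_lights(anki_vector.lights.green_light)  # all corners green
            time.sleep(2)
            cube.set_lights(anki_vector.lights.off_light)
            robot.world.disconnect_cube()


if __name__ == "__main__":
    main()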

1.8 drive to cliff and back up

"""Make Vector drive to a cliff and back up.

Place the robot about a foot from a "cliff" (such as a tabletop edge),
then run this script.

This tutorial is an advanced example that shows the SDK's integration
with the Vector behavior system.

The Vector behavior system uses a priority ordering to determine
what the robot will do next. The highest priorities in the behavior
system include the following:
* When Vector reaches a cliff, he will back up to avoid falling.
* When Vector is low on battery, he will start searching for his charger
and self-dock.

When the SDK is running at a lower priority level than high priorities
like cliff and low battery, an SDK program can lose its ability to
control the robot when a cliff is reached or when the battery is low.

This example shows how, after reaching a cliff, the SDK program can
re-request control so it can continue controlling the robot after
reaching the cliff.
"""

import anki_vector
from anki_vector.util import distance_mm, speed_mmps


def main():
    args = anki_vector.util.parse_command_args()

    with anki_vector.Robot(args.serial) as robot:
        print("Vector SDK has behavior control...")
        robot.behavior.drive_off_charger()

        print("Drive Vector straight until he reaches cliff...")
        # Once robot reaches cliff, he will play his typical cliff reactions.
        robot.behavior.drive_straight(distance_mm(5000), speed_mmps(100))

        robot.conn.run_coroutine(robot.conn.control_lost_event.wait()).result()

        print("Lost SDK behavior control. Request SDK behavior control again...")
        robot.conn.request_control()

        print("Drive Vector backward away from the cliff...")
        robot.behavior.drive_straight(distance_mm(-300), speed_mmps(100))


if __name__ == "__main__":
    main()
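
request_control has a counterpart, release_control, so a program can also hand control back to Vector's own behaviors deliberately rather than losing it at a cliff. A minimal sketch:

import time

import anki_vector


def main():
    args = anki_vector.util.parse_command_args()
    with anki_vector.Robot(args.serial) as robot:
        robot.say_text("SDK has control")
        robot.conn.release_control()  # Vector's built-in behaviors take over
        time.sleep(5)
        robot.conn.request_control()  # reclaim control for the SDK
        robot.say_text("SDK has control again")


if __name__ == "__main__":
    main()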

1.9 show photo

"""Show a photo taken by Vector.

Grabs the pictures off of Vector and open them via PIL.

Before running this script, please make sure you have successfully
had Vector take a photo by saying, "Hey Vector! Take a photo."
"""

import io
import sys

try:
    from PIL import Image
except ImportError:
    sys.exit("Cannot import from PIL: Do `pip3 install --user Pillow` to install")

import anki_vector


def main():
    args = anki_vector.util.parse_command_args()
    with anki_vector.Robot(args.serial) as robot:
        if len(robot.photos.photo_info) == 0:
            print('\n\nNo photos found on Vector. Ask him to take a photo first by saying, "Hey Vector! Take a photo."\n\n')
            return
        for photo in robot.photos.photo_info:
            print(f"Opening photo {photo.photo_id}")
            val = robot.photos.get_photo(photo.photo_id)
            image = Image.open(io.BytesIO(val.image))
            image.show()


if __name__ == "__main__":
    main()
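
The same photo API can archive pictures to disk instead of opening a viewer. A sketch:

import io

from PIL import Image

import anki_vector


def main():
    args = anki_vector.util.parse_command_args()
    with anki_vector.Robot(args.serial) as robot:
        for photo in robot.photos.photo_info:
            val = robot.photos.get_photo(photo.photo_id)
            image = Image.open(io.BytesIO(val.image))
            # Save each photo next to the script as a PNG
            image.save("vector_photo_{}.png".format(photo.photo_id))


if __name__ == "__main__":
    main()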

1.10 eye color

"""Set Vector's eye color.
"""

import time
import anki_vector


def main():
    args = anki_vector.util.parse_command_args()

    with anki_vector.Robot(args.serial) as robot:
        print("Set Vector's eye color to purple...")
        robot.behavior.set_eye_color(hue=0.83, saturation=0.76)

        print("Sleep 5 seconds...")
        time.sleep(5)


if __name__ == '__main__':
    main()
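
Both hue and saturation are in the 0.0-1.0 range, so sweeping hue cycles the eyes through the whole palette. A sketch:

import time
import anki_vector


def main():
    args = anki_vector.util.parse_command_args()
    with anki_vector.Robot(args.serial) as robot:
        # Step through ten hues around the color wheel
        for i in range(10):
            robot.behavior.set_eye_color(hue=i / 10.0, saturation=0.9)
            time.sleep(1)


if __name__ == "__main__":
    main()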

1.11 face event subscription 

"""Wait for Vector to see a face, and then print output to the console.

This script demonstrates how to set up a listener for an event. It
subscribes to event 'robot_observed_face'. When that event is dispatched,
method 'on_robot_observed_face' is called, which prints text to the console.
Vector will also say "I see a face" one time, and the program will exit when
he finishes speaking.
"""

import functools
import threading

import anki_vector
from anki_vector.events import Events
from anki_vector.util import degrees

said_text = False


def main():
    evt = threading.Event()

    def on_robot_observed_face(robot, event_type, event):
        print("Vector sees a face")
        global said_text
        if not said_text:
            said_text = True
            robot.say_text("I see a face!")
            evt.set()

    args = anki_vector.util.parse_command_args()
    with anki_vector.Robot(args.serial, enable_face_detection=True) as robot:

        # If necessary, move Vector's Head and Lift to make it easy to see his face
        robot.behavior.set_head_angle(degrees(45.0))
        robot.behavior.set_lift_height(0.0)

        on_robot_observed_face = functools.partial(on_robot_observed_face, robot)
        robot.events.subscribe(on_robot_observed_face, Events.robot_observed_face)

        print("------ waiting for face events, press ctrl+c to exit early ------")

        try:
            if not evt.wait(timeout=5):
                print("------ Vector never saw your face! ------")
        except KeyboardInterrupt:
            pass

        robot.events.unsubscribe(on_robot_observed_face, Events.robot_observed_face)


if __name__ == '__main__':
    main()

1.12 wake word subscription 

"""Wait for Vector to hear "Hey Vector!" and then play an animation.

The wake_word event only is dispatched when the SDK program has
not requested behavior control. After the robot hears "Hey Vector!"
and the event is received, you can then request behavior control
and control the robot. See the 'requires_behavior_control' method in
connection.py for more information.
"""

import functools
import threading

import anki_vector
from anki_vector.events import Events

wake_word_heard = False


def main():
    evt = threading.Event()

    def on_wake_word(robot, event_type, event):
        robot.conn.request_control()

        global wake_word_heard
        if not wake_word_heard:
            wake_word_heard = True
            robot.say_text("Hello")
            evt.set()

    args = anki_vector.util.parse_command_args()
    with anki_vector.Robot(args.serial, requires_behavior_control=False, cache_animation_list=False) as robot:
        on_wake_word = functools.partial(on_wake_word, robot)
        robot.events.subscribe(on_wake_word, Events.wake_word)

        print('------ Vector is waiting to hear "Hey Vector!" Press ctrl+c to exit early ------')

        try:
            if not evt.wait(timeout=10):
                print('------ Vector never heard "Hey Vector!" ------')
        except KeyboardInterrupt:
            pass


if __name__ == '__main__':
    main()

1.13 custom objects

"""This example demonstrates how you can define custom objects.

The example defines several custom objects (2 cubes, a wall and a box). When
Vector sees the markers for those objects he will report that he observed an
object of that size and shape there.

You can adjust the markers, marker sizes, and object sizes to fit whatever
object you have and the exact size of the markers that you print out.
"""

import time

import anki_vector
from anki_vector.objects import CustomObjectMarkers, CustomObjectTypes


def handle_object_appeared(event_type, event):
    # This will be called whenever an EvtObjectAppeared is dispatched -
    # whenever an Object comes into view.
    print(f"--------- Vector started seeing an object --------- \n{event.obj}")


def handle_object_disappeared(event_type, event):
    # This will be called whenever an EvtObjectDisappeared is dispatched -
    # whenever an Object goes out of view.
    print(f"--------- Vector stopped seeing an object --------- \n{event.obj}")


def main():
    args = anki_vector.util.parse_command_args()
    with anki_vector.Robot(args.serial,
                           default_logging=False,
                           show_viewer=True,
                           show_3d_viewer=True,
                           enable_camera_feed=True,
                           enable_custom_object_detection=True,
                           enable_nav_map_feed=True) as robot:
        # Add event handlers for whenever Vector sees a new object
        robot.events.subscribe(handle_object_appeared, anki_vector.events.Events.object_appeared)
        robot.events.subscribe(handle_object_disappeared, anki_vector.events.Events.object_disappeared)

        # define a unique cube (44mm x 44mm x 44mm) (approximately the same size as Vector's light cube)
        # with a 50mm x 50mm Circles2 image on every face. Note that marker_width_mm and marker_height_mm
        # parameter values must match the dimensions of the printed marker.
        cube_obj = robot.world.define_custom_cube(custom_object_type=CustomObjectTypes.CustomType00,
                                                  marker=CustomObjectMarkers.Circles2,
                                                  size_mm=44.0,
                                                  marker_width_mm=50.0,
                                                  marker_height_mm=50.0,
                                                  is_unique=True)

        # define a unique cube (88mm x 88mm x 88mm) (approximately 2x the size of Vector's light cube)
        # with a 50mm x 50mm Circles3 image on every face.
        big_cube_obj = robot.world.define_custom_cube(custom_object_type=CustomObjectTypes.CustomType01,
                                                      marker=CustomObjectMarkers.Circles3,
                                                      size_mm=88.0,
                                                      marker_width_mm=50.0,
                                                      marker_height_mm=50.0,
                                                      is_unique=True)

        # define a unique wall (150mm x 120mm, 10mm thick for all walls)
        # with a 50mm x 30mm Triangles2 image on front and back
        wall_obj = robot.world.define_custom_wall(custom_object_type=CustomObjectTypes.CustomType02,
                                                  marker=CustomObjectMarkers.Triangles2,
                                                  width_mm=150,
                                                  height_mm=120,
                                                  marker_width_mm=50,
                                                  marker_height_mm=30,
                                                  is_unique=True)

        # define a unique box (20mm deep x 20mm wide x 20mm tall)
        # with a different 50mm x 50mm image on each of the 6 faces
        box_obj = robot.world.define_custom_box(custom_object_type=CustomObjectTypes.CustomType03,
                                                marker_front=CustomObjectMarkers.Diamonds2,   # front
                                                marker_back=CustomObjectMarkers.Hexagons2,    # back
                                                marker_top=CustomObjectMarkers.Hexagons3,     # top
                                                marker_bottom=CustomObjectMarkers.Hexagons4,  # bottom
                                                marker_left=CustomObjectMarkers.Triangles3,   # left
                                                marker_right=CustomObjectMarkers.Triangles4,  # right
                                                depth_mm=20.0,
                                                width_mm=20.0,
                                                height_mm=20.0,
                                                marker_width_mm=50.0,
                                                marker_height_mm=50.0,
                                                is_unique=True)

        if ((cube_obj is not None) and (big_cube_obj is not None) and
                (wall_obj is not None) and (box_obj is not None)):
            print("All objects defined successfully!")
        else:
            print("One or more object definitions failed!")
            return

        print("\n\nShow a marker specified in the Python script to Vector and you will see the related 3d objects\n"
              "display in Vector's 3d_viewer window. You will also see messages print every time a custom object\n"
              "enters or exits Vector's view. Markers can be found from the docs under CustomObjectMarkers.\n\n")

        try:
            while True:
                time.sleep(0.5)
        except KeyboardInterrupt:
            pass


if __name__ == "__main__":
    main()

 

2. apps

apps is divided into four parts, each with a single example:

2.1 3d viewer

"""3d Viewer example, with remote control.

This is an example of how you can use the 3D viewer with a program, and the
3D Viewer and controls will work automatically.
"""

import time

import anki_vector


def main():
    args = anki_vector.util.parse_command_args()
    with anki_vector.Robot(args.serial,
                           show_viewer=True,
                           enable_camera_feed=True,
                           show_3d_viewer=True,
                           enable_face_detection=True,
                           enable_custom_object_detection=True,
                           enable_nav_map_feed=True):
        print("Starting 3D Viewer. Use Ctrl+C to quit.")
        try:
            while True:
                time.sleep(0.5)
        except KeyboardInterrupt:
            pass


if __name__ == "__main__":
    main()

2.2 interactive shell

"""Command Line Interface for Vector

This is an example of integrating Vector with an ipython-based command line interface.
"""

import sys

try:
    from IPython.terminal.embed import InteractiveShellEmbed
except ImportError:
    sys.exit('Cannot import from ipython: Do `pip3 install ipython` to install')

import anki_vector

usage = """Use the [tab] key to auto-complete commands, and see all available methods and properties.

For example, type 'robot.' then press the [tab] key and you'll see all the robot functions.
Keep pressing tab to cycle through all of the available options.

All IPython commands work as usual.
Here's some useful syntax:
  robot?   -> Details about 'robot'.
  robot??  -> More detailed information including code for 'robot'.
These commands will work on all objects inside of the shell.

You can even call the functions that send messages to Vector, and he'll respond just like he would in a script.
Try it out! Type:
    robot.anim.play_animation('anim_pounce_success_02')
"""

args = anki_vector.util.parse_command_args()

ipyshell = InteractiveShellEmbed(banner1='\nWelcome to the Vector Interactive Shell!',
                                 exit_msg='Goodbye\n')

if __name__ == "__main__":
    with anki_vector.Robot(args.serial,
                           enable_camera_feed=True,
                           show_viewer=True) as robot:
        # Invoke the ipython shell while connected to Vector
        ipyshell(usage)

2.3 proximity mapper

"""Maps a region around Vector using the proximity sensor.

Vector will turn in place and use his sensor to detect walls in his
local environment.  These walls are displayed in the 3D Viewer.  The
visualizer does not affect the robot's internal state or behavior.

Vector expects this environment to be static - if objects are moved
he will have no knowledge of them.
"""

import asyncio
import concurrent
from math import cos, sin, inf, acos
import os
import sys

sys.path.append(os.path.join(os.path.dirname(__file__), 'lib'))
from proximity_mapper_state import ClearedTerritory, MapState, Wall, WallSegment   # pylint: disable=wrong-import-position

import anki_vector   # pylint: disable=wrong-import-position
from anki_vector.util import parse_command_args, radians, degrees, distance_mm, speed_mmps, Vector3  # pylint: disable=wrong-import-position

# Constants

#: The maximum distance (in millimeters) the scan considers valid for a proximity response.
#: Wall detection past this threshold will be disregarded, and an 'open' node will
#: be created at this distance instead.  Increasing this value may degrade the
#: reliability of this program, see note below:
#:
#: NOTE: The proximity sensor works by sending a light pulse, and seeing how long that pulse takes
#: to reflect and return to the sensor.  The proximity sensor does not specifically have a maximum
#: range, but will get unreliable results below a certain return signal strength.  This return signal
#: is impacted by environmental conditions (such as the orientation and material of the detected obstacle)
#: as well as the distance.  Additionally, increasing this radius will reduce the resolution of contact
#: points, necessitating changes to PROXIMITY_SCAN_SAMPLE_FREQUENCY_HZ and PROXIMITY_SCAN_BIND_THRESHOLD_MM
#: to maintain effective wall prediction.
PROXIMITY_SCAN_DISTANCE_THRESHOLD_MM = 300

#: The distance (in millimeters) to place an open node if no proximity results are detected along
#: a given line of sight.  This should be smaller than the distance threshold, since these nodes
#: indicate safe points for the robot to drive to, and the robot's size should be taken into account
#: when estimating a maximum safe driving distance
PROXIMITY_SCAN_OPEN_NODE_DISTANCE_MM = 230

#: How frequently (in hertz) the robot checks proximity data while doing a scan.
PROXIMITY_SCAN_SAMPLE_FREQUENCY_HZ = 15.0

#: How long (in seconds) the robot spends doing its 360 degree scan.
PROXIMITY_SCAN_TURN_DURATION_S = 10.0

#: How close (in millimeters) together two detected contact points need to be for the robot to
#: consider them part of a continuous wall.
PROXIMITY_SCAN_BIND_THRESHOLD_MM = 30.0

#: A delay (in seconds) the program waits after the scan finishes before shutting down.
#: This allows the user time to explore the mapped 3d environment in the viewer and can be
#: tuned to any desired length.  A value of 0.0 will prevent the viewer from closing.
PROXIMITY_EXPLORATION_SHUTDOWN_DELAY_S = 8.0


# @TODO enable when testing shows it is ready to go
#: ACTIVELY_EXPLORE_SPACE can be activated to allow the robot to move
#: into an open space after scanning, and continue the process until all open
#: spaces are explored.
ACTIVELY_EXPLORE_SPACE = True
#: The speed (in millimeters/second) the robot drives while exploring.
EXPLORE_DRIVE_SPEED_MMPS = 40.0
#: The speed (in degrees/second) the robot turns while exploring.
EXPLORE_TURN_SPEED_DPS = 90.0


#: Takes a position in 3d space where a contact was detected, and adds it to the map state
#: by either creating a wall, adding to wall or storing a loose contact point.
async def add_proximity_contact_to_state(node_position: Vector3, state: MapState):

    # Comparison function for sorting points by distance.
    def compare_distance(elem):
        return (elem - node_position).magnitude_squared

    # Comparison function for sorting walls by distance using their head as a reference point.
    def compare_head_distance(elem):
        return (elem.vertices[0] - node_position).magnitude_squared

    # Comparison function for sorting walls by distance using their tail as a reference point.
    def compare_tail_distance(elem):
        return (elem.vertices[-1] - node_position).magnitude_squared

    # Sort all the loose contact nodes not yet incorporated into walls by
    # their distance to our reading position.  If the nearest one is within
    # our binding threshold - store it as a viable wall creation partner.
    # (infinity is used as a standin for 'nothing')
    closest_contact_distance = inf
    if state.contact_nodes:
        state.contact_nodes.sort(key=compare_distance)
        closest_contact_distance = (state.contact_nodes[0] - node_position).magnitude
        if closest_contact_distance > PROXIMITY_SCAN_BIND_THRESHOLD_MM:
            closest_contact_distance = inf

    # Sort all the walls both by head and tail distance from our sample
    # if either of the results are within our binding threshold, store them
    # as potential wall extension candidates for our sample.
    # (infinity is used as a standin for 'nothing')
    closest_head_distance = inf
    closest_tail_distance = inf
    if state.walls:
        state.walls.sort(key=compare_tail_distance)
        closest_tail_distance = (state.walls[0].vertices[-1] - node_position).magnitude
        if closest_tail_distance > PROXIMITY_SCAN_BIND_THRESHOLD_MM:
            closest_tail_distance = inf

        state.walls.sort(key=compare_head_distance)
        closest_head_distance = (state.walls[0].vertices[0] - node_position).magnitude
        if closest_head_distance > PROXIMITY_SCAN_BIND_THRESHOLD_MM:
            closest_head_distance = inf

    # Create a new wall if a loose contact node is in bind range and
    # is closer than any existing wall.  The contact node will be removed.
    if closest_contact_distance <= PROXIMITY_SCAN_BIND_THRESHOLD_MM and closest_contact_distance < closest_head_distance and closest_contact_distance < closest_tail_distance:
        state.walls.append(Wall(WallSegment(state.contact_nodes[0], node_position)))
        state.contact_nodes.pop(0)

    # Extend a wall if its head is within bind range and is closer than
    # any loose contacts or wall tails.
    elif closest_head_distance <= PROXIMITY_SCAN_BIND_THRESHOLD_MM and closest_head_distance < closest_contact_distance and closest_head_distance < closest_tail_distance:
        state.walls[0].insert_head(node_position)

    # Extend a wall if its tail is within bind range and is closer than
    # any loose contacts or wall heads.
    elif closest_tail_distance <= PROXIMITY_SCAN_BIND_THRESHOLD_MM and closest_tail_distance < closest_contact_distance and closest_tail_distance < closest_head_distance:
        state.walls.sort(key=compare_tail_distance)
        state.walls[0].insert_tail(node_position)

    # If nothing was found to bind with, store the sample as a loose contact node.
    else:
        state.contact_nodes.append(node_position)


#: Takes a position in 3d space and adds it to the map state as an open node
async def add_proximity_non_contact_to_state(node_position: Vector3, state: MapState):
    # Check to see if the uncontacted sample is inside of any area considered already explored.
    is_open_unexplored = True
    for ct in state.cleared_territories:
        if (node_position - ct.center).magnitude < ct.radius:
            is_open_unexplored = False

    # If the uncontacted sample is in unfamiliar ground, store it as an open node.
    if is_open_unexplored:
        state.open_nodes.append(node_position)


#: Modifies the map state with the details of a proximity reading
async def analyze_proximity_sample(reading: anki_vector.proximity.ProximitySensorData, robot: anki_vector.robot.Robot, state: MapState):
    # Check if the reading meets the engine's metrics for valid, and that it's within our specified distance threshold.
    reading_contacted = reading.is_valid and reading.distance.distance_mm < PROXIMITY_SCAN_DISTANCE_THRESHOLD_MM

    if reading_contacted:
        # The distance will either be the reading data, or our threshold distance if the reading is considered uncontacted.
        reading_distance = reading.distance.distance_mm if reading_contacted else PROXIMITY_SCAN_DISTANCE_THRESHOLD_MM

        # Convert the distance to a 3d position in worldspace.
        reading_position = Vector3(
            robot.pose.position.x + cos(robot.pose_angle_rad) * reading_distance,
            robot.pose.position.y + sin(robot.pose_angle_rad) * reading_distance,
            robot.pose.position.z)

        await add_proximity_contact_to_state(reading_position, state)
    else:
        # Convert the distance to a 3d position in worldspace.
        safe_driving_position = Vector3(
            robot.pose.position.x + cos(robot.pose_angle_rad) * PROXIMITY_SCAN_OPEN_NODE_DISTANCE_MM,
            robot.pose.position.y + sin(robot.pose_angle_rad) * PROXIMITY_SCAN_OPEN_NODE_DISTANCE_MM,
            robot.pose.position.z)

        await add_proximity_non_contact_to_state(safe_driving_position, state)


#: Repeatedly collects proximity data samples and converts them to nodes and walls for the map state
async def collect_proximity_data_loop(robot: anki_vector.robot.Robot, future: concurrent.futures.Future, state: MapState):
    try:
        scan_interval = 1.0 / PROXIMITY_SCAN_SAMPLE_FREQUENCY_HZ

        # Runs until the collection_active flag is cleared.
        # This flag is cleared external to this function.
        while state.collection_active:
            # Collect proximity data from the sensor.
            reading = robot.proximity.last_sensor_reading
            if reading is not None:
                await analyze_proximity_sample(reading, robot, state)
            robot.viewer_3d.user_data_queue.put(state)
            await asyncio.sleep(scan_interval)

    # Exceptions raised in this process are ignored, unless we set them on the future, and then run future.result() at a later time
    except Exception as e:    # pylint: disable=broad-except
        future.set_exception(e)
    finally:
        future.set_result(state)


#: Updates the map state by rotating 360 degrees and collecting/applying proximity data samples.
async def scan_area(robot: anki_vector.robot.Robot, state: MapState):
    collect_future = concurrent.futures.Future()

    # The collect_proximity_data task relies on this external trigger to know when it's finished.
    state.collection_active = True

    # Activate the collection task while the robot turns in place.
    collect_task = robot.conn.loop.create_task(collect_proximity_data_loop(robot, collect_future, state))

    # Turn around in place, then send the signal to kill the collection task.
    robot.behavior.turn_in_place(angle=degrees(360.0), speed=degrees(360.0 / PROXIMITY_SCAN_TURN_DURATION_S))
    state.collection_active = False

    # Wait for the collection task to finish.
    robot.conn.run_coroutine(collect_task)
    # While the result of the task is not used, this call will propagate any exceptions that
    # occurred in the task, allowing for debug visibility.
    collect_future.result()


#: Top level call to perform exploration and environment mapping
async def map_explorer(robot: anki_vector.robot.Robot):
    # Drop the lift, so that it does not block the proximity sensor
    robot.behavior.set_lift_height(0.0)

    # Create the map state, and add its rendering function to the viewer's render pipeline
    state = MapState()
    robot.viewer_3d.add_render_call(state.render)

    # Comparison function used for sorting which open nodes are the furthest from all existing
    # walls and loose contacts.
    # (Using 1/r^2 to respond strongly to a small number of close contacts and weakly to many distant contacts)
    def open_point_sort_func(position: Vector3):
        proximity_sum = 0
        for p in state.contact_nodes:
            proximity_sum = proximity_sum + 1 / (p - position).magnitude_squared
        for c in state.walls:
            for p in c.vertices:
                proximity_sum = proximity_sum + 1 / (p - position).magnitude_squared
        return proximity_sum

    # Loop until running out of open samples to navigate to,
    # or if the process has yet to start (indicated by a lack of cleared_territories).
    while (state.open_nodes and ACTIVELY_EXPLORE_SPACE) or not state.cleared_territories:
        if robot.pose:
            # Delete any open samples within range of the robot.
            state.open_nodes = [position for position in state.open_nodes if (position - robot.pose.position).magnitude > PROXIMITY_SCAN_DISTANCE_THRESHOLD_MM]

            # Collect map data for the robot's current location.
            await scan_area(robot, state)

            # Add where the robot is to the map's cleared territories.
            state.cleared_territories.append(ClearedTerritory(robot.pose.position, PROXIMITY_SCAN_DISTANCE_THRESHOLD_MM))

            # @TODO: This whole block should ideally be replaced with the go_to_pose actions when that is ready to go.
            # Alternatively, the turn&drive commands can be modified to respond to collisions by cancelling.  After
            # either change, ACTIVELY_EXPLORE_SPACE should be defaulted True
            if ACTIVELY_EXPLORE_SPACE and state.open_nodes:
                # Sort the open nodes and find our next navigation point.
                state.open_nodes.sort(key=open_point_sort_func)
                nav_point = state.open_nodes[0]

                # Calculate the distance and direction of this next navigation point.
                nav_point_delta = Vector3(
                    nav_point.x - robot.pose.position.x,
                    nav_point.y - robot.pose.position.y,
                    0)
                nav_distance = nav_point_delta.magnitude
                nav_direction = nav_point_delta.normalized

                # Convert the nav_direction into a turn angle relative to the robot's current facing.
                robot_forward = Vector3(*robot.pose.rotation.to_matrix().forward_xyz).normalized
                turn_angle = acos(nav_direction.dot(robot_forward))
                if nav_direction.cross(robot_forward).z > 0:
                    turn_angle = -turn_angle

                # Turn toward the nav point, and drive to it.
                robot.behavior.turn_in_place(angle=radians(turn_angle), speed=degrees(EXPLORE_TURN_SPEED_DPS))
                robot.behavior.drive_straight(distance=distance_mm(nav_distance), speed=speed_mmps(EXPLORE_DRIVE_SPEED_MMPS))

    if PROXIMITY_EXPLORATION_SHUTDOWN_DELAY_S == 0.0:
        while True:
            await asyncio.sleep(1.0)
    else:
        print('finished exploring - waiting an additional {0} seconds, then shutting down'.format(PROXIMITY_EXPLORATION_SHUTDOWN_DELAY_S))
        await asyncio.sleep(PROXIMITY_EXPLORATION_SHUTDOWN_DELAY_S)


if __name__ == '__main__':
    # Connect to the robot
    args = parse_command_args()
    with anki_vector.Robot(args.serial,
                           enable_camera_feed=True,
                           show_viewer=True,
                           enable_nav_map_feed=False,
                           show_3d_viewer=True) as robotInstance:
        robotInstance.behavior.drive_off_charger()
        loop = asyncio.get_event_loop()
        loop.run_until_complete(map_explorer(robotInstance))

2.4 remote control

"""Control Vector using a webpage on your computer.

This example lets you control Vector by Remote Control, using a webpage served by Flask.
"""

import io
import json
import sys
import time
from lib import flask_helpers

import anki_vector
from anki_vector import util


try:
    from flask import Flask, request
except ImportError:
    sys.exit("Cannot import from flask: Do `pip3 install --user flask` to install")

try:
    from PIL import Image
except ImportError:
    sys.exit("Cannot import from PIL: Do `pip3 install --user Pillow` to install")


def create_default_image(image_width, image_height, do_gradient=False):
    """Create a place-holder PIL image to use until we have a live feed from Vector"""
    image_bytes = bytearray([0x70, 0x70, 0x70]) * image_width * image_height

    if do_gradient:
        i = 0
        for y in range(image_height):
            for x in range(image_width):
                image_bytes[i] = int(255.0 * (x / image_width))   # R
                image_bytes[i + 1] = int(255.0 * (y / image_height))  # G
                image_bytes[i + 2] = 0                                # B
                i += 3

    image = Image.frombytes('RGB', (image_width, image_height), bytes(image_bytes))
    return image


flask_app = Flask(__name__)
_default_camera_image = create_default_image(320, 240)
_is_mouse_look_enabled_by_default = False


def remap_to_range(x, x_min, x_max, out_min, out_max):
    """convert x (in x_min..x_max range) to out_min..out_max range"""
    if x < x_min:
        return out_min
    if x > x_max:
        return out_max
    ratio = (x - x_min) / (x_max - x_min)
    return out_min + ratio * (out_max - out_min)


class RemoteControlVector:

    def __init__(self, robot):
        self.vector = robot

        self.drive_forwards = 0
        self.drive_back = 0
        self.turn_left = 0
        self.turn_right = 0
        self.lift_up = 0
        self.lift_down = 0
        self.head_up = 0
        self.head_down = 0

        self.go_fast = 0
        self.go_slow = 0

        self.is_mouse_look_enabled = _is_mouse_look_enabled_by_default
        self.mouse_dir = 0

        all_anim_names = self.vector.anim.anim_list
        all_anim_names.sort()
        self.anim_names = []

        # Hide a few specific test animations that don't behave well
        bad_anim_names = [
            "ANIMATION_TEST",
            "soundTestAnim"]

        for anim_name in all_anim_names:
            if anim_name not in bad_anim_names:
                self.anim_names.append(anim_name)

        default_anims_for_keys = ["anim_turn_left_01",  # 0
                                  "anim_blackjack_victorwin_01",  # 1
                                  "anim_pounce_success_02",  # 2
                                  "anim_feedback_shutup_01",  # 3
                                  "anim_knowledgegraph_success_01",  # 4
                                  "anim_wakeword_groggyeyes_listenloop_01",  # 5
                                  "anim_fistbump_success_01",  # 6
                                  "anim_reacttoface_unidentified_01",  # 7
                                  "anim_rtpickup_loop_10",  # 8
                                  "anim_volume_stage_05"]  # 9

        self.anim_index_for_key = [0] * 10
        kI = 0
        for default_key in default_anims_for_keys:
            try:
                anim_idx = self.anim_names.index(default_key)
            except ValueError:
                print("Error: default_anim %s is not in the list of animations" % default_key)
                anim_idx = kI
            self.anim_index_for_key[kI] = anim_idx
            kI += 1

        self.action_queue = []
        self.text_to_say = "Hi I'm Vector"

    def set_anim(self, key_index, anim_index):
        self.anim_index_for_key[key_index] = anim_index

    def handle_mouse(self, mouse_x, mouse_y):
        """Called whenever mouse moves
            mouse_x, mouse_y are in the 0..1 range (0,0 = top left, 1,1 = bottom right of window)
        """
        if self.is_mouse_look_enabled:
            mouse_sensitivity = 1.5  # higher = more twitchy
            self.mouse_dir = remap_to_range(mouse_x, 0.0, 1.0, -mouse_sensitivity, mouse_sensitivity)
            self.update_mouse_driving()

            desired_head_angle = remap_to_range(mouse_y, 0.0, 1.0, 45, -25)
            head_angle_delta = desired_head_angle - util.radians(self.vector.head_angle_rad).degrees
            head_vel = head_angle_delta * 0.03
            self.vector.motors.set_head_motor(head_vel)

    def set_mouse_look_enabled(self, is_mouse_look_enabled):
        was_mouse_look_enabled = self.is_mouse_look_enabled
        self.is_mouse_look_enabled = is_mouse_look_enabled
        if not is_mouse_look_enabled:
            # cancel any current mouse-look turning
            self.mouse_dir = 0
            if was_mouse_look_enabled:
                self.update_mouse_driving()
                self.update_head()

    def update_drive_state(self, key_code, is_key_down, speed_changed):
        """Update state of driving intent from keyboard, and if anything changed then call update_driving"""
        update_driving = True
        if key_code == ord('W'):
            self.drive_forwards = is_key_down
        elif key_code == ord('S'):
            self.drive_back = is_key_down
        elif key_code == ord('A'):
            self.turn_left = is_key_down
        elif key_code == ord('D'):
            self.turn_right = is_key_down
        else:
            if not speed_changed:
                update_driving = False
        return update_driving

    def update_lift_state(self, key_code, is_key_down, speed_changed):
        """Update state of lift move intent from keyboard, and if anything changed then call update_lift"""
        update_lift = True
        if key_code == ord('R'):
            self.lift_up = is_key_down
        elif key_code == ord('F'):
            self.lift_down = is_key_down
        else:
            if not speed_changed:
                update_lift = False
        return update_lift

    def update_head_state(self, key_code, is_key_down, speed_changed):
        """Update state of head move intent from keyboard, and if anything changed then call update_head"""
        update_head = True
        if key_code == ord('T'):
            self.head_up = is_key_down
        elif key_code == ord('G'):
            self.head_down = is_key_down
        else:
            if not speed_changed:
                update_head = False
        return update_head

    def handle_key(self, key_code, is_shift_down, is_alt_down, is_key_down):
        """Called on any key press or release
           Holding a key down may result in repeated handle_key calls with is_key_down==True
        """

        # Update desired speed / fidelity of actions based on shift/alt being held
        was_go_fast = self.go_fast
        was_go_slow = self.go_slow

        self.go_fast = is_shift_down
        self.go_slow = is_alt_down

        speed_changed = (was_go_fast != self.go_fast) or (was_go_slow != self.go_slow)

        update_driving = self.update_drive_state(key_code, is_key_down, speed_changed)

        update_lift = self.update_lift_state(key_code, is_key_down, speed_changed)

        update_head = self.update_head_state(key_code, is_key_down, speed_changed)

        # Update driving, head and lift as appropriate
        if update_driving:
            self.update_mouse_driving()
        if update_head:
            self.update_head()
        if update_lift:
            self.update_lift()

        # Handle any keys being released (e.g. the end of a key-click)
        if not is_key_down:
            if ord('9') >= key_code >= ord('0'):
                anim_name = self.key_code_to_anim_name(key_code)
                self.queue_action((self.vector.anim.play_animation, anim_name))
            elif key_code == ord(' '):
                self.queue_action((self.vector.say_text, self.text_to_say))

    def key_code_to_anim_name(self, key_code):
        key_num = key_code - ord('0')
        anim_num = self.anim_index_for_key[key_num]
        anim_name = self.anim_names[anim_num]
        return anim_name

    def func_to_name(self, func):
        if func == self.vector.say_text:
            return "say_text"
        if func == self.vector.anim.play_animation:
            return "play_anim"
        return "UNKNOWN"

    def action_to_text(self, action):
        func, args = action
        return self.func_to_name(func) + "( " + str(args) + " )"

    def action_queue_to_text(self, action_queue):
        out_text = ""
        i = 0
        for action in action_queue:
            out_text += "[" + str(i) + "] " + self.action_to_text(action)
            i += 1
        return out_text

    def queue_action(self, new_action):
        if len(self.action_queue) > 10:
            self.action_queue.pop(0)
        self.action_queue.append(new_action)

    def update(self):
        """Try and execute the next queued action"""
        if self.action_queue:
            queued_action, action_args = self.action_queue[0]
            if queued_action(action_args):
                self.action_queue.pop(0)

    def pick_speed(self, fast_speed, mid_speed, slow_speed):
        if self.go_fast:
            if not self.go_slow:
                return fast_speed
        elif self.go_slow:
            return slow_speed
        return mid_speed

    def update_lift(self):
        lift_speed = self.pick_speed(8, 4, 2)
        lift_vel = (self.lift_up - self.lift_down) * lift_speed
        self.vector.motors.set_lift_motor(lift_vel)

    def update_head(self):
        if not self.is_mouse_look_enabled:
            head_speed = self.pick_speed(2, 1, 0.5)
            head_vel = (self.head_up - self.head_down) * head_speed
            self.vector.motors.set_head_motor(head_vel)

    def update_mouse_driving(self):
        drive_dir = (self.drive_forwards - self.drive_back)

        turn_dir = (self.turn_right - self.turn_left) + self.mouse_dir
        if drive_dir < 0:
            # It feels more natural to turn the opposite way when reversing
            turn_dir = -turn_dir

        forward_speed = self.pick_speed(150, 75, 50)
        turn_speed = self.pick_speed(100, 50, 30)

        l_wheel_speed = (drive_dir * forward_speed) + (turn_speed * turn_dir)
        r_wheel_speed = (drive_dir * forward_speed) - (turn_speed * turn_dir)

        self.vector.motors.set_wheel_motors(l_wheel_speed, r_wheel_speed, l_wheel_speed * 4, r_wheel_speed * 4)


def get_anim_sel_drop_down(selectorIndex):
    html_text = """<select onchange="handleDropDownSelect(this)" name="animSelector""" + str(selectorIndex) + """">"""
    i = 0
    for anim_name in flask_app.remote_control_vector.anim_names:
        is_selected_item = (i == flask_app.remote_control_vector.anim_index_for_key[selectorIndex])
        selected_text = ''' selected="selected"''' if is_selected_item else ""
        html_text += """<option value=""" + str(i) + selected_text + """>""" + anim_name + """</option>"""
        i += 1
    html_text += """</select>"""
    return html_text


def get_anim_sel_drop_downs():
    html_text = ""
    for i in range(10):
        # list keys 1..9,0 as that's the layout on the keyboard
        key = i + 1 if (i < 9) else 0
        html_text += str(key) + """: """ + get_anim_sel_drop_down(key) + """<br>"""
    return html_text


def to_js_bool_string(bool_value):
    return "true" if bool_value else "false"


@flask_app.route("/")
def handle_index_page():
    return """
    <html>
        <head>
            <title>remote_control_vector.py display</title>
        </head>
        <body>
            <h1>Remote Control Vector</h1>
            <table>
                <tr>
                    <td valign = top>
                        <div id="vectorImageMicrosoftWarning" style="display: none;color: #ff9900; text-align: center;">Video feed performance is better in Chrome or Firefox due to mjpeg limitations in this browser</div>
                        <img src="vectorImage" id="vectorImageId" width=640 height=480>
                        <div id="DebugInfoId"></div>
                    </td>
                    <td width=30></td>
                    <td valign=top>
                        <h2>Controls:</h2>

                        <h3>Driving:</h3>

                        <b>W A S D</b> : Drive Forwards / Left / Back / Right<br><br>
                        <b>Q</b> : Toggle Mouse Look: <button id="mouseLookId" onClick=onMouseLookButtonClicked(this) style="font-size: 14px">Default</button><br>
                        <b>Mouse</b> : Move in browser window to aim<br>
                        (steer and head angle)<br>
                        (similar to an FPS game)<br>

                        <h3>Head:</h3>
                        <b>T</b> : Move Head Up<br>
                        <b>G</b> : Move Head Down<br>

                        <h3>Lift:</h3>
                        <b>R</b> : Move Lift Up<br>
                        <b>F</b>: Move Lift Down<br>
                        <h3>General:</h3>
                        <b>Shift</b> : Hold to Move Faster (Driving, Head and Lift)<br>
                        <b>Alt</b> : Hold to Move Slower (Driving, Head and Lift)<br>
                        <b>P</b> : Toggle Free Play mode: <button id="freeplayId" onClick=onFreeplayButtonClicked(this) style="font-size: 14px">Default</button><br>
                        <h3>Play Animations</h3>
                        <b>0 .. 9</b> : Play Animation mapped to that key<br>
                        <h3>Talk</h3>
                        <b>Space</b> : Say <input type="text" name="sayText" id="sayTextId" value="""" + flask_app.remote_control_vector.text_to_say + """" onchange=handleTextInput(this)>
                    </td>
                    <td width=30></td>
                    <td valign=top>
                    <h2>Animation key mappings:</h2>
                    """ + get_anim_sel_drop_downs() + """<br>
                    </td>
                </tr>
            </table>

            <script type="text/javascript">
                var gLastClientX = -1
                var gLastClientY = -1
                var gIsMouseLookEnabled = """ + to_js_bool_string(_is_mouse_look_enabled_by_default) + """
                var gIsFreeplayEnabled = false
                var gUserAgent = window.navigator.userAgent;
                var gIsMicrosoftBrowser = gUserAgent.indexOf('MSIE ') > 0 || gUserAgent.indexOf('Trident/') > 0 || gUserAgent.indexOf('Edge/') > 0;
                var gSkipFrame = false;

                if (gIsMicrosoftBrowser) {
                    document.getElementById("vectorImageMicrosoftWarning").style.display = "block";
                }

                function postHttpRequest(url, dataSet)
                {
                    var xhr = new XMLHttpRequest();
                    xhr.open("POST", url, true);
                    xhr.send( JSON.stringify( dataSet ) );
                }

                function updateVector()
                {
                    console.log("Updating log")
                    if (gIsMicrosoftBrowser && !gSkipFrame) {
                        // IE doesn't support MJPEG, so we need to ping the server for more images.
                        // Though, if this happens too frequently, the controls will be unresponsive.
                        gSkipFrame = true;
                        document.getElementById("vectorImageId").src="vectorImage?" + (new Date()).getTime();
                    } else if (gSkipFrame) {
                        gSkipFrame = false;
                    }
                    var xhr = new XMLHttpRequest();
                    xhr.onreadystatechange = function() {
                        if (xhr.readyState == XMLHttpRequest.DONE) {
                            document.getElementById("DebugInfoId").innerHTML = xhr.responseText
                        }
                    }

                    xhr.open("POST", "updateVector", true);
                    xhr.send( null );
                }
                setInterval(updateVector , 60);

                function updateButtonEnabledText(button, isEnabled)
                {
                    button.firstChild.data = isEnabled ? "Enabled" : "Disabled";
                }

                function onMouseLookButtonClicked(button)
                {
                    gIsMouseLookEnabled = !gIsMouseLookEnabled;
                    updateButtonEnabledText(button, gIsMouseLookEnabled);
                    isMouseLookEnabled = gIsMouseLookEnabled
                    postHttpRequest("setMouseLookEnabled", {isMouseLookEnabled})
                }

                function onFreeplayButtonClicked(button)
                {
                    gIsFreeplayEnabled = !gIsFreeplayEnabled;
                    updateButtonEnabledText(button, gIsFreeplayEnabled);
                    isFreeplayEnabled = gIsFreeplayEnabled
                    postHttpRequest("setFreeplayEnabled", {isFreeplayEnabled})
                }

                updateButtonEnabledText(document.getElementById("mouseLookId"), gIsMouseLookEnabled);
                updateButtonEnabledText(document.getElementById("freeplayId"), gIsFreeplayEnabled);

                function handleDropDownSelect(selectObject)
                {
                    selectedIndex = selectObject.selectedIndex
                    itemName = selectObject.name
                    postHttpRequest("dropDownSelect", {selectedIndex, itemName});
                }

                function handleKeyActivity (e, actionType)
                {
                    var keyCode  = (e.keyCode ? e.keyCode : e.which);
                    var hasShift = (e.shiftKey ? 1 : 0)
                    var hasCtrl  = (e.ctrlKey  ? 1 : 0)
                    var hasAlt   = (e.altKey   ? 1 : 0)

                    if (actionType=="keyup")
                    {
                        if (keyCode == 80) // 'P'
                        {
                            // Simulate a click of the freeplay button
                            onFreeplayButtonClicked(document.getElementById("freeplayId"))
                        }
                        else if (keyCode == 81) // 'Q'
                        {
                            // Simulate a click of the mouse look button
                            onMouseLookButtonClicked(document.getElementById("mouseLookId"))
                        }
                    }

                    postHttpRequest(actionType, {keyCode, hasShift, hasCtrl, hasAlt})
                }

                function handleMouseActivity (e, actionType)
                {
                    var clientX = e.clientX / document.body.clientWidth  // 0..1 (left..right)
                    var clientY = e.clientY / document.body.clientHeight // 0..1 (top..bottom)
                    var isButtonDown = e.which && (e.which != 0) ? 1 : 0
                    var deltaX = (gLastClientX >= 0) ? (clientX - gLastClientX) : 0.0
                    var deltaY = (gLastClientY >= 0) ? (clientY - gLastClientY) : 0.0
                    gLastClientX = clientX
                    gLastClientY = clientY

                    postHttpRequest(actionType, {clientX, clientY, isButtonDown, deltaX, deltaY})
                }

                function handleTextInput(textField)
                {
                    textEntered = textField.value
                    postHttpRequest("sayText", {textEntered})
                }

                document.addEventListener("keydown", function(e) { handleKeyActivity(e, "keydown") } );
                document.addEventListener("keyup",   function(e) { handleKeyActivity(e, "keyup") } );

                document.addEventListener("mousemove",   function(e) { handleMouseActivity(e, "mousemove") } );

                function stopEventPropagation(event)
                {
                    if (event.stopPropagation)
                    {
                        event.stopPropagation();
                    }
                    else
                    {
                        event.cancelBubble = true
                    }
                }

                document.getElementById("sayTextId").addEventListener("keydown", function(event) {
                    stopEventPropagation(event);
                } );
                document.getElementById("sayTextId").addEventListener("keyup", function(event) {
                    stopEventPropagation(event);
                } );
            </script>

        </body>
    </html>
    """


def get_annotated_image():
    # TODO: Update to use annotated image (add annotate module)
    image = flask_app.remote_control_vector.vector.camera.latest_image
    if image is None:
        return _default_camera_image

    return image


def streaming_video():
    """Video streaming generator function"""
    while True:
        if flask_app.remote_control_vector:
            image = get_annotated_image()

            img_io = io.BytesIO()
            image.save(img_io, 'PNG')
            img_io.seek(0)
            yield (b'--frame\r\n'
                   b'Content-Type: image/png\r\n\r\n' + img_io.getvalue() + b'\r\n')
        else:
            time.sleep(.1)


def serve_single_image():
    if flask_app.remote_control_vector:
        image = get_annotated_image()
        if image:
            return flask_helpers.serve_pil_image(image)

    return flask_helpers.serve_pil_image(_default_camera_image)


def is_microsoft_browser(req):
    agent = req.user_agent.string
    return 'Edge/' in agent or 'MSIE ' in agent or 'Trident/' in agent


@flask_app.route("/vectorImage")
def handle_vectorImage():
    if is_microsoft_browser(request):
        return serve_single_image()
    return flask_helpers.stream_video(streaming_video)


def handle_key_event(key_request, is_key_down):
    message = json.loads(key_request.data.decode("utf-8"))
    if flask_app.remote_control_vector:
        flask_app.remote_control_vector.handle_key(key_code=(message['keyCode']), is_shift_down=message['hasShift'],
                                                   is_alt_down=message['hasAlt'], is_key_down=is_key_down)
    return ""


@flask_app.route('/mousemove', methods=['POST'])
def handle_mousemove():
    """Called from Javascript whenever mouse moves"""
    message = json.loads(request.data.decode("utf-8"))
    if flask_app.remote_control_vector:
        flask_app.remote_control_vector.handle_mouse(mouse_x=(message['clientX']), mouse_y=message['clientY'])
    return ""


@flask_app.route('/setMouseLookEnabled', methods=['POST'])
def handle_setMouseLookEnabled():
    """Called from Javascript whenever mouse-look mode is toggled"""
    message = json.loads(request.data.decode("utf-8"))
    if flask_app.remote_control_vector:
        flask_app.remote_control_vector.set_mouse_look_enabled(is_mouse_look_enabled=message['isMouseLookEnabled'])
    return ""


@flask_app.route('/setFreeplayEnabled', methods=['POST'])
def handle_setFreeplayEnabled():
    """Called from Javascript whenever freeplay mode is toggled on/off"""
    message = json.loads(request.data.decode("utf-8"))
    if flask_app.remote_control_vector:
        isFreeplayEnabled = message['isFreeplayEnabled']
        connection = flask_app.remote_control_vector.vector.conn
        connection.request_control(enable=(not isFreeplayEnabled))
    return ""


@flask_app.route('/keydown', methods=['POST'])
def handle_keydown():
    """Called from Javascript whenever a key is down (note: can generate repeat calls if held down)"""
    return handle_key_event(request, is_key_down=True)


@flask_app.route('/keyup', methods=['POST'])
def handle_keyup():
    """Called from Javascript whenever a key is released"""
    return handle_key_event(request, is_key_down=False)


@flask_app.route('/dropDownSelect', methods=['POST'])
def handle_dropDownSelect():
    """Called from Javascript whenever an animSelector dropdown menu is selected (i.e. modified)"""
    message = json.loads(request.data.decode("utf-8"))

    item_name_prefix = "animSelector"
    item_name = message['itemName']

    if flask_app.remote_control_vector and item_name.startswith(item_name_prefix):
        item_name_index = int(item_name[len(item_name_prefix):])
        flask_app.remote_control_vector.set_anim(item_name_index, message['selectedIndex'])

    return ""


@flask_app.route('/sayText', methods=['POST'])
def handle_sayText():
    """Called from Javascript whenever the saytext text field is modified"""
    message = json.loads(request.data.decode("utf-8"))
    if flask_app.remote_control_vector:
        flask_app.remote_control_vector.text_to_say = message['textEntered']
    return ""


@flask_app.route('/updateVector', methods=['POST'])
def handle_updateVector():
    if flask_app.remote_control_vector:
        flask_app.remote_control_vector.update()
        action_queue_text = ""
        i = 1
        for action in flask_app.remote_control_vector.action_queue:
            action_queue_text += str(i) + ": " + flask_app.remote_control_vector.action_to_text(action) + "<br>"
            i += 1

        return "Action Queue:<br>" + action_queue_text + "\n"
    return ""


def run():
    args = util.parse_command_args()

    with anki_vector.AsyncRobot(args.serial, enable_camera_feed=True) as robot:
        flask_app.remote_control_vector = RemoteControlVector(robot)

        robot.behavior.drive_off_charger()

        flask_helpers.run_flask(flask_app)


if __name__ == '__main__':
    try:
        run()
    except KeyboardInterrupt as e:
        pass
    except anki_vector.exceptions.VectorConnectionException as e:
        sys.exit("A connection error occurred: %s" % e)


 
