diff --git a/.gitignore b/.gitignore
index 48db89c4..36978665 100644
--- a/.gitignore
+++ b/.gitignore
@@ -29,6 +29,7 @@ var/
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
+*.pem

# Installer logs
pip-log.txt
diff --git a/doc/tutorials/Setting-Up-a-New-Robot.md b/doc/tutorials/Setting-Up-a-New-Robot.md
index c56e93e6..bba37e4b 100644
--- a/doc/tutorials/Setting-Up-a-New-Robot.md
+++ b/doc/tutorials/Setting-Up-a-New-Robot.md
@@ -1,3 +1,121 @@
# Setting Up A Robot

STUB -- need to define interfaces that would be attached to

# MISTY robot (API based) V0.1

In order to run HARMONI on an API-based robotic platform, a few modifications are needed.

First of all, the IP of the robot, which is needed in most of the _service files, is saved as a ROS parameter during the initialization of the harmoni_core/harmoni_common_lib/service_manager.py class, and then retrieved directly in the implementation of each service class.

The packages and files that have been modified or added to make HARMONI work on the robot are listed below.

The docker-compose-full.yml file has been modified to run Docker in distributed mode and to enable communication between HARMONI inside the container and external networks. In particular, port forwarding is necessary, the ros_net network must not be specified, and the network mode must be set to "host".

## Usage

## Parameters

## Testing

## References

### harmoni_actuators

#### harmoni_face

The service of this package does not need to be modified.

A "caller" node file has been added; its only role is to call the "Display web page" API of the robot.

The node needs to be added to the test and launch files.

The face has been adapted to better fit the robot display by modifying the web/src/js/face.js file.
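The caller's request can be sketched as follows. This is a minimal standalone sketch: the endpoint, the `URL` and `layer` payload fields, and the 4-second timeout are taken from the misty_caller.py file added in this diff, while the function names and the IPs in the usage example are illustrative placeholders.

```python
import requests
from requests.exceptions import Timeout


def build_webview_request(robot_ip, local_ip, port):
    """Build the URL and payload for Misty's 'Display web page' API."""
    url = "http://{}/api/webviews/display".format(robot_ip)
    payload = {
        "URL": "http://{}:{}".format(local_ip, port),  # where the HARMONI face is served
        "layer": "DefaultWebViewLayer1",
    }
    return url, payload


def display_harmoni_face(robot_ip, local_ip, port):
    """Ask Misty to display the HARMONI face web page on its screen."""
    url, payload = build_webview_request(robot_ip, local_ip, port)
    try:
        return requests.post(url, params=payload, timeout=4).ok
    except Timeout:
        # Robot unreachable; the real node logs a ROS warning here.
        return False
```

Usage: `display_harmoni_face("10.0.0.2", "10.0.0.5", "8081")`, where the local IP and port must point at the machine serving the harmoni_face web page.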
#### harmoni_gesture

This part of the implementation is not yet complete, so the documentation will be adjusted.

Two files have been added to the package: gesture_service_misty.py and misty_gesture_interface.py.

The first is meant to replace the normal gesture_service file, and has therefore been substituted in the launch and test files.

In it, the DO operation that was previously used to order the robot to make a gesture has been replaced by a REQUEST operation, which is more in line with the API nature of the interaction.

It is important to remember that when a REQUEST operation is called, the flag response_received = True must be set once the API call is over.

The parse_gesture_misty() function has been added to the file: there the gestures described in the xml files in the data folder are processed, and for each command composing a gesture an API call is made. A function which prepares the url and the payload for each possible API call has also been added, and the request is then forwarded to the robot. If the call times out for any reason, a second call is made with a longer timeout (connection problems can sometimes happen); if that one also fails, the function returns, setting the done flag to false.

Currently the misty_gesture_interface file only publishes some required parameters to ROS, but we are planning to move the API calls described above into this file, to better resemble the previous work done on QT. The documentation will be updated accordingly.

The xml files contained in the data/Misty folder follow the format of the robot's API calls, adding only a time parameter which chooses after how much time each single movement should be performed. Be aware that the time for sending the call is not calculated in a strict manner, so the execution time may not be completely precise.
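The call-and-retry behaviour described above can be sketched like this. It is an illustrative sketch, not the actual gesture_service_misty.py code: the function and parameter names are hypothetical, and the two timeout values are placeholders.

```python
import requests
from requests.exceptions import Timeout


def send_gesture_command(url, payload, timeout=2, retry_timeout=6):
    """Send one gesture API call, retrying once with a longer timeout.

    Returns True on success; returns False (the 'done' flag) if both
    the first call and the longer retry time out or the robot refuses.
    """
    for t in (timeout, retry_timeout):
        try:
            response = requests.post(url, params=payload, timeout=t)
            return response.ok
        except Timeout:
            continue  # retry once with the longer timeout
    return False  # both calls failed: done flag set to false
```

In the REQUEST-style service, the caller would set `response_received = True` as soon as this function returns, regardless of the outcome.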
#### harmoni_speaker

In this package, speaker_service_misty.py has been added, and the launch and test files have been modified accordingly.

In it, the DO operation that was previously used to order the robot to play a recording has been replaced by a REQUEST operation, which is more in line with the API nature of the interaction.

It is important to remember that when a REQUEST operation is called, the flag response_received = True must be set once the API call is over.

Since Misty needs to receive the files in base64 format, the file_path_to_audio_data function has been modified to convert the wav file into that format. Currently, calling the robot with raw data directly is not implemented, and the data must always be given to the node as a wav file. In case raw data are provided, the node assumes that it is the tts package calling the speaker, and therefore retrieves the data from where the tts node saves the file as a temporary .wav file.

A function which prepares the url and the payload for the API call has been added to the file, and the request is then forwarded to the robot. If the call times out for any reason, a second call is made with a longer timeout (connection problems can sometimes happen); if that one also fails, the function returns, setting the done flag to false.

It is important to note that, since the wav file can be larger than the maximum size allowed for URL parameters, the payload must be passed as data and not as a parameter of the API call.

An argument robot_ip is provided in the service_manager class.
In Misty, the file must either be encoded in base64 or provided as a file. In the speaker_service_misty class the data are converted after being loaded from a .wav file, whether they are provided to the class directly or via a path.

To play the file, the flag ImmediatelyApply has been set to True in the API call.
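The base64 conversion and payload construction can be sketched as follows. The conversion mirrors what the modified file_path_to_audio_data does; the payload field names (FileName, Data, ImmediatelyApply, OverwriteExisting) follow Misty's SaveAudio API as described in this section, but build_save_audio_payload and the clip name are illustrative, not the actual HARMONI code.

```python
import base64


def file_path_to_audio_data(path):
    """Read a .wav file and return its base64-encoded contents,
    as Misty expects (sketch of the modified helper)."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


def build_save_audio_payload(path, name="harmoni_speech.wav"):
    """Payload for the SaveAudio call.

    This dict is sent as the request *data*, not as URL parameters,
    since the encoded wav can exceed the maximum parameter size.
    """
    return {
        "FileName": name,          # same name every time, so OverwriteExisting applies
        "Data": file_path_to_audio_data(path),
        "ImmediatelyApply": True,  # play as soon as the upload completes
        "OverwriteExisting": True, # replace the previous clip
    }
```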
The flag OverwriteExisting makes the new file replace the old one, since the file is always uploaded with the same name.
Another possible way of doing this was to split it into two different API calls: SaveAudio with ImmediatelyApply set to false, followed by PlayAudioClip.

When replacing HARMONI's speaker_service with speaker_service_misty, the type of action that the class performs has been changed from DO to REQUEST: this was done because conceptually we are asking an external service (the robot) to perform an action, not performing the action directly inside HARMONI. This change requires modifying everything that interfaces with the speaker service.

Be sure to add the correct robot_ip argument to the launch file if you want to test the case with no exception, or to add a mock one if you want to test the exception.

#### harmoni_tts

This package has not been relevantly modified.

#### harmoni_web

This package has not been relevantly modified.

### harmoni_core

#### harmoni_common_lib

The only relevant modification is the addition of the IP of the robot, saved as a ROS parameter in the initialization of the harmoni_core/harmoni_common_lib/service_manager.py class. In the future this parameter is expected to be passed through a configuration file. (TODO)

#### harmoni_decision

The configuration file has been modified, adding the parameter gesture: ["misty"] that was previously commented out.

#### harmoni_pattern

Two interactions have been added: pattern_scripting/misty_demo.json and pattern_scripting/misty_interaction.json (one for tests and one for the actual interactions). The interaction names must also be added to the configuration file. Currently the demos are "mock-ups" of real interactions.

The node has not been relevantly modified.

### harmoni_detectors

This package has not been modified yet.

### harmoni_dialogues

This package has not been modified yet.
### harmoni_sensors

This package has not been modified yet.

### HARMONI face: Using Misty API

An argument robot_ip is provided in the service_manager class.

A class misty_caller has been added to automatically issue a request to display the webView of the harmoni face on the robot's screen.

diff --git a/docker-run-command.md b/docker-run-command.md new file mode 100644 index 00000000..9d83a5c3 --- /dev/null +++ b/docker-run-command.md @@ -0,0 +1,17 @@ +
+sudo docker run -it --name harmoni_core --network host --init \
-e "ROS_DISTRO=noetic" \
-e "CATKIN_WS=harmoni_catkin_ws" \
-e "IS_DOCKER_ENV=true" \
-v /home/afar/harmoni_catkin_ws/src/HARMONI/dockerfiles/config/setup_script.sh:"/root/.setup_script.sh" \
-v /home/afar/harmoni_catkin_ws/src/HARMONI/dockerfiles/config/dev_setup_script.sh:"/root/.dev_setup_script.sh" \
-v /home/afar/harmoni_catkin_ws/src/HARMONI/dockerfiles/config/asoundrc:"/root/.asoundrc" \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v /home/afar/.ssh/:/root/.ssh:ro \
-v /home/afar/.vimrc:/root/.vimrc:ro \
-v /home/afar/.vim/:/root/.vim/:ro \
-v /etc/timezone:/etc/timezone:ro \
-v /etc/localtime:/etc/localtime:ro \
-v /home/afar/harmoni_catkin_ws/src/HARMONI/:/root/local_mount/HARMONI/ \
-w "/root/harmoni_catkin_ws/src/HARMONI" \
harmoniteam/harmoni:noetic-full bash
\ No newline at end of file
diff --git a/dockerfiles/compose_templates/docker-compose-full-misty.yml b/dockerfiles/compose_templates/docker-compose-full-misty.yml new file mode 100644 index 00000000..8e3fa1f6 --- /dev/null +++ b/dockerfiles/compose_templates/docker-compose-full-misty.yml @@ -0,0 +1,62 @@ +version: "3.7" + +services: + harmoni_full: + container_name: harmoni_full + build: + context: . + dockerfile: dockerfiles/harmoni/kinetic/full/dockerfile #substitute $ROS_DISTRO with kinetic for now. 
+ network: host + image: harmoniteam/harmoni:kinetic-full + init: true + environment: +# DISPLAY: $DISPLAY +# QT_GRAPHICSSYSTEM: native + ROS_DISTRO: kinetic +# ROS_MASTER_URI: http://172.18.3.4:11311 + IS_DOCKER_ENV: "true" +# ROS_HOSTNAME: harmoni_full + CATKIN_WS: harmoni_catkin_ws +# privileged: true + network_mode: "host" +# networks: +# ros_net: +# ipv4_address: 172.18.3.4 +# hostname: harmoni_full + ports: + - "11312:11312" + - "33691:33691" + - "8081:8081" #harmoni_face + - "8082:8082" #harmoni_web + devices: +# - /dev/dri:/dev/dri + - /dev/snd:/dev/snd +# - /dev/video0:/dev/video0 + volumes: + # - harmoni_catkin_ws:/root/harmoni_catkin_ws + - ../HARMONI/:/root/local_mount/HARMONI/ + # Configuration + - ~/.aws:/root/.aws/ + - ./dockerfiles/config/.asoundrc:/root/.asoundrc + - ~/.gcp/private-keys.json:/root/.gcp/private-keys.json + # Other + - /home/.ssh/:/root/.ssh:ro + - /tmp/.X11-unix:/tmp/.X11-unix + - /etc/timezone:/etc/timezone:ro + - /etc/localtime:/etc/localtime:ro + working_dir: /root/harmoni_catkin_ws/src/HARMONI + + command: + tail -f /dev/null + # bash -c "terminator -ue \"echo 'Entering harmoni_full Container... 
\\n start with roscore and rlharmoniservices' && bash\"" + +#networks: +# ros_net: +# driver: bridge +# ipam: +# driver: default +# config: +# - subnet: 172.18.3.0/24 +# volumes: +# harmoni_catkin_ws: +# wav2letter: diff --git a/harmoni_actuators/harmoni_face/launch/face_service_misty.launch b/harmoni_actuators/harmoni_face/launch/face_service_misty.launch new file mode 100755 index 00000000..79c6de10 --- /dev/null +++ b/harmoni_actuators/harmoni_face/launch/face_service_misty.launch @@ -0,0 +1,21 @@ + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/harmoni_actuators/harmoni_face/nodes/http_server_runner.py b/harmoni_actuators/harmoni_face/nodes/http_server_runner.py index 393b33e1..eeed15bc 100755 --- a/harmoni_actuators/harmoni_face/nodes/http_server_runner.py +++ b/harmoni_actuators/harmoni_face/nodes/http_server_runner.py @@ -42,6 +42,12 @@ def is_int(value): if is_int(args[PORT_IDX]): port = args[PORT_IDX] command += ["-p", port] + #command += ["-S"] + + #cert_path = args[DIR_IDX] + "/server_cert.pem" + #key_path = args[DIR_IDX] + "/server_key.pem" + #command += ["-C", cert_path] + #command += ["-K", key_path] p = subprocess.Popen(command) atexit.register(p.terminate) diff --git a/harmoni_actuators/harmoni_face/src/harmoni_face/face_service.py b/harmoni_actuators/harmoni_face/src/harmoni_face/face_service.py old mode 100755 new mode 100644 index 25b0e1f9..e0214446 --- a/harmoni_actuators/harmoni_face/src/harmoni_face/face_service.py +++ b/harmoni_actuators/harmoni_face/src/harmoni_face/face_service.py @@ -12,79 +12,122 @@ # Specific Imports from harmoni_common_lib.constants import ActuatorNameSpace from harmoni_face.msg import FaceRequest +from harmoni_face.face_client import Face +from std_msgs.msg import String, Bool +from geometry_msgs.msg import Point from threading import Timer +from collections import deque import json import ast import os +class NoseService(HarmoniServiceManager): + """ + Nose service + """ + + def 
__init__(self, name, param_nose, face): + """Init of the nose service and setup as Harmoni service + Args: + name (str): service name + param (obj): list of parameters useful for the service + gaze_speed (int): velocity of the gaze command (milliseconds) + """ + super().__init__(name) + self.name = name + self.service_id = hf.get_child_id(self.name) + self.face = face + if not self.face.connected: + self.face.setup_ros() + self.face_nose_pub = rospy.Publisher( + ActuatorNameSpace.face.value + self.service_id + "/expressing/nose", + FaceRequest, + queue_size=1, + ) + self.state = State.INIT + return + + def do(self, data): + """Do expression of nose in web face + Args: + data ([str]): stringified json from tts results + Returns: + response (int): whether SUCCESS of FAIL + message (str): result message + """ + self.actuation_completed = False + try: + self.state = State.REQUEST + face_bool = self.face.face_sequential_request(data) + rospy.loginfo("Completed Expression") + self.state = State.SUCCESS + self.actuation_completed = True + self.result_msg="" + except Exception: + self.state = State.FAILED + self.actuation_completed = True + self.result_msg="" + return {"response": self.state, "message": self.result_msg} + + def send_face_nose_request(self): + """ Send the request to the web page""" + rospy.loginfo("Sending request to webpage of the face eyes") + self.face_nose_pub.publish(self.face_request_nose) + return + class EyesService(HarmoniServiceManager): """ - TODO: Eyes service + Eyes service """ - def __init__(self, name, param): + def __init__(self, name, param_eyes, face): + """Init of the Eyes service and setup as Harmoni service + Args: + name (str): service name + param (obj): list of parameters useful for the service + gaze_speed (int): velocity of the gaze command (milliseconds) + """ super().__init__(name) - """ Initialization of variables and face parameters """ self.name = name - self.gaze_speed = param["gaze_speed"] + self.face = face + if not 
self.face.connected: + self.face.setup_ros() + self.gaze_speed = param_eyes["gaze_speed"] + self.timer_interval = param_eyes["timer_interval"] self.service_id = hf.get_child_id(self.name) - """ Setup the face """ - self.setup_face() - """ Setup the publisher for the face """ - self.face_pub = rospy.Publisher( - ActuatorNameSpace.face.value + self.service_id + "/expressing", + self.face_eyes_pub = rospy.Publisher( + ActuatorNameSpace.face.value + self.service_id + "/expressing/eyes", FaceRequest, queue_size=1, ) - """Setup the face service as server """ self.state = State.INIT return def do(self, data): - """ Do the expression""" + """Do expression of eyes in web face + Args: + data ([str]): stringified json from tts results + Returns: + response (int): whether SUCCESS of FAIL + message (str): result message + """ self.actuation_completed = False - [valid_face_expression, visemes] = self.get_face_data(data) + [gaze_bool, gaze_data] = self._get_gaze_data(data) try: self.state = State.REQUEST - - if visemes != []: - viseme_ids = list(map(lambda b: b["id"], visemes)) - viseme_times = list(map(lambda b: b["start"], visemes)) - self.face_request = FaceRequest( - visemes=viseme_ids, viseme_ms=self.speed_viseme, times=viseme_times - ) - t = Timer(self.timer_interval, self.send_face_request) - t.start() - start_time = rospy.Time.now() - rospy.loginfo("The last viseme lasts %i" % viseme_times[-1]) - time_sleep = int(viseme_times[-1]) + self.min_duration_viseme - rospy.sleep(time_sleep) - if valid_face_expression != []: - rospy.loginfo("Valid face expression not null") - if len(valid_face_expression) > 1: - for ind, f in range(0, len(valid_face_expression) - 1): - rospy.loginfo("The valid expression is %s" % f) - aus = list(map(lambda s: s[2:], f["aus"])) - au_ms = f["au_ms"] * 1000 - self.face_request = FaceRequest( - aus=aus, au_degrees=f["au_degrees"], au_ms=au_ms - ) - t = Timer(self.timer_interval, self.send_face_request) - t.start() - start_time = rospy.Time.now() - aus 
= list(map(lambda s: s[2:], valid_face_expression[-1]["aus"])) - au_ms = valid_face_expression[-1]["au_ms"] * 1000 - self.face_request = FaceRequest( - aus=aus, - au_degrees=valid_face_expression[-1]["au_degrees"], - au_ms=au_ms, - ) - t = Timer(self.timer_interval, self.send_face_request) + # The gaze eyes are played first + if gaze_bool: + rospy.loginfo("Valid gaze") + rospy.loginfo(f"The valid gaze is {gaze_data}") + self.face_request_eyes = FaceRequest( + retarget_gaze=gaze_bool, gaze_target=gaze_data, hold_gaze=2 + ) + t = Timer(self.timer_interval, self.send_face_eyes_request) t.start() start_time = rospy.Time.now() - rospy.loginfo("The last facial expression") - rospy.sleep(valid_face_expression[-1]["au_ms"]) + face_bool = self.face.face_sequential_request(data) + rospy.loginfo("Completed Expression") self.state = State.SUCCESS self.actuation_completed = True self.result_msg="" @@ -92,132 +135,69 @@ def do(self, data): self.state = State.FAILED self.actuation_completed = True self.result_msg="" - rospy.loginfo("Completed Expression") return {"response": self.state, "message": self.result_msg} - def setup_face(self): - """ Setup the face """ - rospy.loginfo("Setting up the %s eyes" % self.name) - rospy.loginfo("Checking that face is connected to ROS websocket") - #rospy.wait_for_service("/harmoni/actuating/face/is_connected") - rospy.loginfo("Done, face is connected to ROS websocket") - [ - self.face_expression, - self.face_expression_names, - ] = self.get_facial_expressions_list() - self.visemes = [ - "BILABIAL", - "LABIODENTAL", - "INTERDENTAL", - "DENTAL_ALVEOLAR", - "POSTALVEOLAR", - "VELAR_GLOTTAL", - "CLOSE_FRONT_VOWEL", - "OPEN_FRONT_VOWEL", - "MID_CENTRAL_VOWEL", - "OPEN_BACK_VOWEL", - "CLOSE_BACK_VOWEL", - "IDLE", - ] - return - - def send_face_request(self): + def send_face_eyes_request(self): """ Send the request to the web page""" - rospy.loginfo("Sending request to webpage of the face") - self.face_pub.publish(self.face_request) + 
rospy.loginfo("Sending request to webpage of the face eyes") + self.face_eyes_pub.publish(self.face_request_eyes) return - def get_facial_expressions_list(self): - """ Get facial expression list from the resource file""" - facial_expression_list = [] - face_expression_au = {} - base_dir = os.path.dirname(__file__) - with open( - base_dir + "/resource/cordial_face_expression.json", "r" - ) as json_file: - data = json.load(json_file) - for facial_expression in data: - facial_expression_list.append(facial_expression) - au_name = str(facial_expression) - aus = [] - for dofs in data[facial_expression]["dofs"]: - aus.append(str(dofs)) - for keyframe in data[facial_expression]["keyframes"]: - au_degrees = keyframe["pose"] - au_ms = keyframe["time"] - face_expression_au[au_name] = { - "aus": aus, - "au_degrees": au_degrees, - "au_ms": au_ms, - } - return face_expression_au, facial_expression_list - - def get_face_data(self, data): - """ Get the validated data of the face""" - # rospy.loginfo("The face expressions available are %s" % self.face_expression) - + def _get_gaze_data(self, data): + """Get target gaze data""" data = ast.literal_eval(data) if "behavior_data" in data: behavior_data = ast.literal_eval(data["behavior_data"]) else: behavior_data = data - viseme_set = [] - facial_expression = [] - sentence = [] + gaze_bool = False + gaze_target = Point() for b in behavior_data: if "id" in b.keys(): - if b["id"] in self.visemes: - viseme_set.append(b) - if b["id"] in self.face_expression_names: - facial_expression.append(b) - if "character" in b.keys(): - sentence.append(b["value"]) - # viseme = list(filter(lambda b: b["id"] in self.visemes, data)) - # facial_expression = list(filter(lambda b: b["id"] in self.face_expression_names, data)) - rospy.loginfo("These facial expressions include %s" % facial_expression) - ordered_facial_data = list( - sorted(facial_expression, key=lambda face: face["start"]) - ) - validated_face_expr = [] - for fexp in ordered_facial_data: - 
validated_face_expr.append(self.face_expression[fexp["id"]]) - viseme_set = [] - rospy.loginfo("The validated facial expressions are %s" % validated_face_expr) - rospy.loginfo("The validated visemes are %s" % viseme_set) - print("Finished getting face data for sentence:", sentence) - return (validated_face_expr, viseme_set) - - + if b["id"] == "target": + gaze_target.x = b["point"][0] + gaze_target.y = b["point"][1] + gaze_target.z = b["point"][2] + gaze_bool = True + return gaze_bool, gaze_target + + class MouthService(HarmoniServiceManager): """ Mouth service """ - def __init__(self, name, param): + def __init__(self, name, param_mouth, face): + """Init of the Mouth service and setup as Harmoni service + Args: + name (str): service name + param_mouth (obj): list of parameters useful for the service + min_duration_viseme (int): minimum amount of time a viseme can last (seconds) + speed_viseme (int): velocity of the viseme execution (milliseconds) + timer_interval (int): interval of time between two consecutive visemes (seconds) + """ super().__init__(name) - """ Initialization of variables and face parameters """ rospy.loginfo("MouthService initializing") self.name = name - self.min_duration_viseme = param["min_duration_viseme"] - self.speed_viseme = param["speed_viseme"] - self.timer_interval = param["timer_interval"] + self.face = face + if not self.face.connected: + self.face.setup_ros() + self.min_duration_viseme = param_mouth["min_duration_viseme"] + self.speed_viseme = param_mouth["speed_viseme"] + self.timer_interval = param_mouth["timer_interval"] self.service_id = hf.get_child_id(self.name) - self.setup_face() - self.face_pub = rospy.Publisher( - ActuatorNameSpace.face.value + self.service_id + "/expressing", + self.face_mouth_pub = rospy.Publisher( + ActuatorNameSpace.face.value + self.service_id + "/expressing/mouth", FaceRequest, queue_size=1, ) - """Setup the face service as server """ self.state = State.INIT return def do(self, data): """Do 
expression in web face - Args: data ([str]): stringified json from tts results - Returns: response (int): whether SUCCESS of FAIL message (str): result message @@ -225,185 +205,101 @@ def do(self, data): rospy.loginfo("Do expressions") self.actuation_completed = False self.result_msg="" - [valid_face_expression, visemes] = self.get_face_data(data) + [viseme_bool, visemes] = self._get_viseme_data(data) try: self.state = State.REQUEST - - if visemes != []: + # The visemes are played at first + if viseme_bool: viseme_ids = list(map(lambda b: b["id"], visemes)) viseme_times = list(map(lambda b: b["start"], visemes)) - self.face_request = FaceRequest( + self.face_mouth_request = FaceRequest( visemes=viseme_ids, viseme_ms=self.speed_viseme, times=viseme_times ) - - t = Timer(self.timer_interval, self.send_face_request) + t = Timer(self.timer_interval, self.send_face_mouth_request) t.start() start_time = rospy.Time.now() rospy.loginfo("The last viseme lasts %i" % viseme_times[-1]) time_sleep = int(viseme_times[-1]) + self.min_duration_viseme rospy.sleep(time_sleep) - if valid_face_expression != []: - rospy.loginfo("Valid face expression not null") - if len(valid_face_expression) > 1: - for ind, f in range(0, len(valid_face_expression) - 1): - rospy.loginfo("The valid expression is %s" % f) - aus = list(map(lambda s: s[2:], f["aus"])) - au_ms = f["au_ms"] * 1000 - self.face_request = FaceRequest( - aus=aus, au_degrees=f["au_degrees"], au_ms=au_ms - ) - t = Timer(self.timer_interval, self.send_face_request) - t.start() - start_time = rospy.Time.now() - aus = list(map(lambda s: s[2:], valid_face_expression[-1]["aus"])) - au_ms = valid_face_expression[-1]["au_ms"] * 1000 - self.face_request = FaceRequest( - aus=aus, - au_degrees=valid_face_expression[-1]["au_degrees"], - au_ms=au_ms, - ) - t = Timer(self.timer_interval, self.send_face_request) - t.start() - start_time = rospy.Time.now() - rospy.loginfo("The last facial expression") - 
rospy.sleep(valid_face_expression[-1]["au_ms"]) + face_bool = self.face.face_sequential_request(data) self.state = State.SUCCESS self.actuation_completed = True + rospy.loginfo("Completed Expression") except Exception: self.state = State.FAILED self.actuation_completed = True - rospy.loginfo("Completed Expression") return {"response": self.state, "message": self.result_msg} - def setup_face(self): - """Setup the face, waiting for the connection with the web page - """ - rospy.loginfo("Setting up the %s mouth" % self.name) - rospy.loginfo("Checking that face is connected to ROS websocket") - rospy.wait_for_service("/harmoni/actuating/face/is_connected") - rospy.loginfo("Done, face is connected to ROS websocket") - [ - self.face_expression, - self.face_expression_names, - ] = self.get_facial_expressions_list() - self.visemes = [ - "BILABIAL", - "LABIODENTAL", - "INTERDENTAL", - "DENTAL_ALVEOLAR", - "POSTALVEOLAR", - "VELAR_GLOTTAL", - "CLOSE_FRONT_VOWEL", - "OPEN_FRONT_VOWEL", - "MID_CENTRAL_VOWEL", - "OPEN_BACK_VOWEL", - "CLOSE_BACK_VOWEL", - "IDLE", - ] - return - def send_face_request(self): + def send_face_mouth_request(self): """ Send the request to the web page""" rospy.loginfo("Sending request to webpage of the face") - self.face_pub.publish(self.face_request) + self.face_mouth_pub.publish(self.face_mouth_request) return - def get_facial_expressions_list(self): - """ Get facial expression list from the resource file""" - facial_expression_list = [] - face_expression_au = {} - base_dir = os.path.dirname(__file__) - with open( - base_dir + "/resource/cordial_face_expression.json", "r" - ) as json_file: - data = json.load(json_file) - for facial_expression in data: - facial_expression_list.append(facial_expression) - au_name = str(facial_expression) - aus = [] - for dofs in data[facial_expression]["dofs"]: - aus.append(str(dofs)) - for keyframe in data[facial_expression]["keyframes"]: - au_degrees = keyframe["pose"] - au_ms = keyframe["time"] - 
face_expression_au[au_name] = { - "aus": aus, - "au_degrees": au_degrees, - "au_ms": au_ms, - } - return face_expression_au, facial_expression_list - - def get_face_data(self, data): + def _get_viseme_data(self, data): """ Get the validated data of the face""" - # rospy.loginfo("The face expressions available are %s" % self.face_expression) - + viseme_bool = False data = ast.literal_eval(data) if "behavior_data" in data: behavior_data = ast.literal_eval(data["behavior_data"]) else: behavior_data = data viseme_set = [] - facial_expression = [] - sentence = [] for b in behavior_data: if "id" in b.keys(): - if b["id"] in self.visemes: + if b["id"] in self.face.visemes: viseme_set.append(b) - if b["id"] in self.face_expression_names: - facial_expression.append(b) - if "character" in b.keys(): - sentence.append(b["value"]) - # viseme = list(filter(lambda b: b["id"] in self.visemes, data)) - - # facial_expression = list(filter(lambda b: b["id"] in self.face_expression_names, data)) - rospy.loginfo("These facial expressions include %s" % facial_expression) - - ordered_facial_data = list( - sorted(facial_expression, key=lambda face: face["start"]) - ) - - validated_face_expr = [] - for fexp in ordered_facial_data: - validated_face_expr.append(self.face_expression[fexp["id"]]) + viseme_bool = True + if viseme_bool: + for i in range(0, len(viseme_set) - 1): + viseme_set[i]["duration"] = ( + viseme_set[i + 1]["start"] - viseme_set[i]["start"] + ) - for i in range(0, len(viseme_set) - 1): - viseme_set[i]["duration"] = ( - viseme_set[i + 1]["start"] - viseme_set[i]["start"] + viseme_set[-1]["duration"] = self.min_duration_viseme + viseme_behaviors = list( + filter(lambda b: b["duration"] >= self.min_duration_viseme, viseme_set) ) - - viseme_set[-1]["duration"] = self.min_duration_viseme - - viseme_behaviors = list( - filter(lambda b: b["duration"] >= self.min_duration_viseme, viseme_set) - ) - ordered_visemes = list(sorted(viseme_behaviors, key=lambda b: b["start"])) - 
rospy.loginfo("The validated facial expressions are %s" % validated_face_expr) - rospy.loginfo("The validated visemes are %s" % viseme_set) - print("Finished getting face data for sentence:", sentence) - return (validated_face_expr, viseme_set) - - + ordered_visemes = list(sorted(viseme_behaviors, key=lambda b: b["start"])) + rospy.loginfo("The validated visemes are %s" % viseme_set) + return viseme_bool, viseme_set def main(): """Set names, collect params, and give service to server""" service_name = ActuatorNameSpace.face.name - instance_id = rospy.get_param("/instance_id") + instance_id = rospy.get_param("/instance_id") #default service_id_mouth = f"{service_name}_mouth_{instance_id}" service_id_eyes = f"{service_name}_eyes_{instance_id}" + service_id_nose = f"{service_name}_nose_{instance_id}" try: rospy.init_node(service_name) + param = rospy.get_param(service_name + "/" + instance_id + "_param") param_eyes = rospy.get_param(service_name + "/" + instance_id + "_param/eyes/") param_mouth = rospy.get_param(service_name + "/" + instance_id + "_param/mouth/") - s_eyes = EyesService(service_name + "_eyes_" + instance_id, param_eyes) - s_mouth = MouthService(service_name + "_mouth_" + instance_id, param_mouth) + param_nose = rospy.get_param(service_name + "/" + instance_id + "_param/nose/") + face = Face(ActuatorNameSpace.face.value, instance_id, param) + s_eyes = EyesService(service_name + "_eyes_" + instance_id, param_eyes, face) + s_mouth = MouthService(service_name + "_mouth_" + instance_id, param_mouth, face) + s_nose = NoseService(service_name + "_nose_" + instance_id, param_nose, face) service_server_eyes = HarmoniServiceServer(service_id_eyes, s_eyes) service_server_mouth = HarmoniServiceServer(service_id_mouth, s_mouth) + service_server_nose = HarmoniServiceServer(service_id_nose, s_nose) + + print(service_name) + print("****************************************************************************") + print(service_id_eyes) + print(service_id_mouth) + 
print(service_id_nose) + service_server_eyes.start_sending_feedback() service_server_mouth.start_sending_feedback() + service_server_nose.start_sending_feedback() + rospy.spin() except rospy.ROSInterruptException: pass if __name__ == "__main__": - main() + main() \ No newline at end of file diff --git a/harmoni_actuators/harmoni_face/src/harmoni_face/misty_caller.py b/harmoni_actuators/harmoni_face/src/harmoni_face/misty_caller.py new file mode 100755 index 00000000..b92b3440 --- /dev/null +++ b/harmoni_actuators/harmoni_face/src/harmoni_face/misty_caller.py @@ -0,0 +1,43 @@ +#!/usr/bin/env python3 + +import rospy +import requests +from requests.exceptions import Timeout +import socket +import sys +from subprocess import check_output + + +def is_int(value): + try: + int(value) + return True + except ValueError: + return False + +#DIR_IDX = 1 +PORT_IDX = 2 + +if __name__ == "__main__": + rospy.init_node("misty_caller") + robot_ip = rospy.get_param("/robot_ip") + + local_ip = str(check_output(['hostname', '--all-ip-addresses']).split()[0])[2:-1] + + args = sys.argv + if is_int(args[PORT_IDX]): + port = args[PORT_IDX] + + payload = {'URL': 'http://' + local_ip + ':' + port , + 'layer': 'DefaultWebViewLayer1'} + print(payload['URL']) + + try: + response = requests.post('http://{}/api/webviews/display'.format(robot_ip), + params = payload, + timeout = 4) + except Timeout: + rospy.logwarn("web_player failed: The ip of the robot appears unreachable") + while not rospy.is_shutdown(): + rospy.sleep(1) + #rospy.spin() \ No newline at end of file diff --git a/harmoni_actuators/harmoni_face/src/harmoni_face/misty_face_service_.py b/harmoni_actuators/harmoni_face/src/harmoni_face/misty_face_service_.py new file mode 100755 index 00000000..04d63846 --- /dev/null +++ b/harmoni_actuators/harmoni_face/src/harmoni_face/misty_face_service_.py @@ -0,0 +1,408 @@ +#!/usr/bin/env python3 + +# Common Imports +import rospy +import roslib + +from harmoni_common_lib.constants import State 
+from harmoni_common_lib.service_server import HarmoniServiceServer +from harmoni_common_lib.service_manager import HarmoniServiceManager +import harmoni_common_lib.helper_functions as hf + +# Specific Imports +from harmoni_common_lib.constants import ActuatorNameSpace +from harmoni_face.msg import FaceRequest +from threading import Timer +import json +import ast +import os + + +class EyesService(HarmoniServiceManager): + """ + TODO: Eyes service + """ + + def __init__(self, name, param): + super().__init__(name) + """ Initialization of variables and face parameters """ + self.name = name + self.gaze_speed = param["gaze_speed"] + self.service_id = hf.get_child_id(self.name) + """ Setup the face """ + self.setup_face() + """ Setup the publisher for the face """ + self.face_pub = rospy.Publisher( + ActuatorNameSpace.face.value + self.service_id + "/expressing", + FaceRequest, + queue_size=1, + ) + """Setup the face service as server """ + self.state = State.INIT + return + + def do(self, data): + """ Do the expression""" + self.actuation_completed = False + [valid_face_expression, visemes] = self.get_face_data(data) + try: + self.state = State.REQUEST + + if visemes != []: + viseme_ids = list(map(lambda b: b["id"], visemes)) + viseme_times = list(map(lambda b: b["start"], visemes)) + self.face_request = FaceRequest( + visemes=viseme_ids, viseme_ms=self.speed_viseme, times=viseme_times + ) + t = Timer(self.timer_interval, self.send_face_request) + t.start() + start_time = rospy.Time.now() + rospy.loginfo("The last viseme lasts %i" % viseme_times[-1]) + time_sleep = int(viseme_times[-1]) + self.min_duration_viseme + rospy.sleep(time_sleep) + if valid_face_expression != []: + rospy.loginfo("Valid face expression not null") + if len(valid_face_expression) > 1: + for f in valid_face_expression[:-1]: + rospy.loginfo("The valid expression is %s" % f) + aus = list(map(lambda s: s[2:], f["aus"])) + au_ms = f["au_ms"] * 1000 + self.face_request = 
FaceRequest( + aus=aus, au_degrees=f["au_degrees"], au_ms=au_ms + ) + t = Timer(self.timer_interval, self.send_face_request) + t.start() + start_time = rospy.Time.now() + aus = list(map(lambda s: s[2:], valid_face_expression[-1]["aus"])) + au_ms = valid_face_expression[-1]["au_ms"] * 1000 + self.face_request = FaceRequest( + aus=aus, + au_degrees=valid_face_expression[-1]["au_degrees"], + au_ms=au_ms, + ) + t = Timer(self.timer_interval, self.send_face_request) + t.start() + start_time = rospy.Time.now() + rospy.loginfo("The last facial expression") + rospy.sleep(valid_face_expression[-1]["au_ms"]) + self.state = State.SUCCESS + self.actuation_completed = True + self.result_msg="" + except Exception: + self.state = State.FAILED + self.actuation_completed = True + self.result_msg="" + rospy.loginfo("Completed Expression") + return {"response": self.state, "message": self.result_msg} + + def setup_face(self): + """ Setup the face """ + rospy.loginfo("Setting up the %s eyes" % self.name) + rospy.loginfo("Checking that face is connected to ROS websocket") + #rospy.wait_for_service("/harmoni/actuating/face/is_connected") + rospy.loginfo("Done, face is connected to ROS websocket") + [ + self.face_expression, + self.face_expression_names, + ] = self.get_facial_expressions_list() + self.visemes = [ + "BILABIAL", + "LABIODENTAL", + "INTERDENTAL", + "DENTAL_ALVEOLAR", + "POSTALVEOLAR", + "VELAR_GLOTTAL", + "CLOSE_FRONT_VOWEL", + "OPEN_FRONT_VOWEL", + "MID_CENTRAL_VOWEL", + "OPEN_BACK_VOWEL", + "CLOSE_BACK_VOWEL", + "IDLE", + ] + return + + def send_face_request(self): + """ Send the request to the web page""" + rospy.loginfo("Sending request to webpage of the face") + self.face_pub.publish(self.face_request) + return + + def get_facial_expressions_list(self): + """ Get facial expression list from the resource file""" + facial_expression_list = [] + face_expression_au = {} + base_dir = os.path.dirname(__file__) + with open( + base_dir + 
"/resource/cordial_face_expression.json", "r" + ) as json_file: + data = json.load(json_file) + for facial_expression in data: + facial_expression_list.append(facial_expression) + au_name = str(facial_expression) + aus = [] + for dofs in data[facial_expression]["dofs"]: + aus.append(str(dofs)) + for keyframe in data[facial_expression]["keyframes"]: + au_degrees = keyframe["pose"] + au_ms = keyframe["time"] + face_expression_au[au_name] = { + "aus": aus, + "au_degrees": au_degrees, + "au_ms": au_ms, + } + return face_expression_au, facial_expression_list + + def get_face_data(self, data): + """ Get the validated data of the face""" + # rospy.loginfo("The face expressions available are %s" % self.face_expression) + + data = ast.literal_eval(data) + if "behavior_data" in data: + behavior_data = ast.literal_eval(data["behavior_data"]) + else: + behavior_data = data + viseme_set = [] + facial_expression = [] + sentence = [] + for b in behavior_data: + if "id" in b.keys(): + if b["id"] in self.visemes: + viseme_set.append(b) + if b["id"] in self.face_expression_names: + facial_expression.append(b) + if "character" in b.keys(): + sentence.append(b["value"]) + # viseme = list(filter(lambda b: b["id"] in self.visemes, data)) + # facial_expression = list(filter(lambda b: b["id"] in self.face_expression_names, data)) + rospy.loginfo("These facial expressions include %s" % facial_expression) + ordered_facial_data = list( + sorted(facial_expression, key=lambda face: face["start"]) + ) + validated_face_expr = [] + for fexp in ordered_facial_data: + validated_face_expr.append(self.face_expression[fexp["id"]]) + viseme_set = [] + rospy.loginfo("The validated facial expressions are %s" % validated_face_expr) + rospy.loginfo("The validated visemes are %s" % viseme_set) + print("Finished getting face data for sentence:", sentence) + return (validated_face_expr, viseme_set) + + +class MouthService(HarmoniServiceManager): + """ + Mouth service + """ + + def __init__(self, name, param): 
+ super().__init__(name) + """ Initialization of variables and face parameters """ + rospy.loginfo("MouthService initializing") + self.name = name + self.min_duration_viseme = param["min_duration_viseme"] + self.speed_viseme = param["speed_viseme"] + self.timer_interval = param["timer_interval"] + self.service_id = hf.get_child_id(self.name) + self.setup_face() + self.face_pub = rospy.Publisher( + ActuatorNameSpace.face.value + self.service_id + "/expressing", + FaceRequest, + queue_size=1, + ) + """Setup the face service as server """ + self.state = State.INIT + return + + def do(self, data): + """Do expression in web face + + Args: + data ([str]): stringified json from tts results + + Returns: + response (int): whether SUCCESS or FAIL + message (str): result message + """ + rospy.loginfo("Do expressions") + self.actuation_completed = False + self.result_msg="" + [valid_face_expression, visemes] = self.get_face_data(data) + try: + self.state = State.REQUEST + + if visemes != []: + viseme_ids = list(map(lambda b: b["id"], visemes)) + viseme_times = list(map(lambda b: b["start"], visemes)) + self.face_request = FaceRequest( + visemes=viseme_ids, viseme_ms=self.speed_viseme, times=viseme_times + ) + + t = Timer(self.timer_interval, self.send_face_request) + t.start() + start_time = rospy.Time.now() + rospy.loginfo("The last viseme lasts %i" % viseme_times[-1]) + time_sleep = int(viseme_times[-1]) + self.min_duration_viseme + rospy.sleep(time_sleep) + if valid_face_expression != []: + rospy.loginfo("Valid face expression not null") + if len(valid_face_expression) > 1: + for f in valid_face_expression[:-1]: + rospy.loginfo("The valid expression is %s" % f) + aus = list(map(lambda s: s[2:], f["aus"])) + au_ms = f["au_ms"] * 1000 + self.face_request = FaceRequest( + aus=aus, au_degrees=f["au_degrees"], au_ms=au_ms + ) + t = Timer(self.timer_interval, self.send_face_request) + t.start() + start_time = rospy.Time.now() + aus = list(map(lambda s: s[2:], 
valid_face_expression[-1]["aus"])) + au_ms = valid_face_expression[-1]["au_ms"] * 1000 + self.face_request = FaceRequest( + aus=aus, + au_degrees=valid_face_expression[-1]["au_degrees"], + au_ms=au_ms, + ) + t = Timer(self.timer_interval, self.send_face_request) + t.start() + start_time = rospy.Time.now() + rospy.loginfo("The last facial expression") + rospy.sleep(valid_face_expression[-1]["au_ms"]) + self.state = State.SUCCESS + self.actuation_completed = True + except Exception: + self.state = State.FAILED + self.actuation_completed = True + rospy.loginfo("Completed Expression") + return {"response": self.state, "message": self.result_msg} + + def setup_face(self): + """Setup the face, waiting for the connection with the web page + """ + rospy.loginfo("Setting up the %s mouth" % self.name) + rospy.loginfo("Checking that face is connected to ROS websocket") + rospy.wait_for_service("/harmoni/actuating/face/is_connected") + rospy.loginfo("Done, face is connected to ROS websocket") + [ + self.face_expression, + self.face_expression_names, + ] = self.get_facial_expressions_list() + self.visemes = [ + "BILABIAL", + "LABIODENTAL", + "INTERDENTAL", + "DENTAL_ALVEOLAR", + "POSTALVEOLAR", + "VELAR_GLOTTAL", + "CLOSE_FRONT_VOWEL", + "OPEN_FRONT_VOWEL", + "MID_CENTRAL_VOWEL", + "OPEN_BACK_VOWEL", + "CLOSE_BACK_VOWEL", + "IDLE", + ] + return + + def send_face_request(self): + """ Send the request to the web page""" + rospy.loginfo("Sending request to webpage of the face") + self.face_pub.publish(self.face_request) + return + + def get_facial_expressions_list(self): + """ Get facial expression list from the resource file""" + facial_expression_list = [] + face_expression_au = {} + base_dir = os.path.dirname(__file__) + with open( + base_dir + "/resource/cordial_face_expression.json", "r" + ) as json_file: + data = json.load(json_file) + for facial_expression in data: + facial_expression_list.append(facial_expression) + au_name = str(facial_expression) + aus = [] + for dofs in 
data[facial_expression]["dofs"]: + aus.append(str(dofs)) + for keyframe in data[facial_expression]["keyframes"]: + au_degrees = keyframe["pose"] + au_ms = keyframe["time"] + face_expression_au[au_name] = { + "aus": aus, + "au_degrees": au_degrees, + "au_ms": au_ms, + } + return face_expression_au, facial_expression_list + + def get_face_data(self, data): + """ Get the validated data of the face""" + # rospy.loginfo("The face expressions available are %s" % self.face_expression) + data = ast.literal_eval(data) + if "behavior_data" in data: + behavior_data = ast.literal_eval(data["behavior_data"]) + else: + behavior_data = data + viseme_set = [] + facial_expression = [] + sentence = [] + for b in behavior_data: + if "id" in b.keys(): + if b["id"] in self.visemes: + viseme_set.append(b) + if b["id"] in self.face_expression_names: + facial_expression.append(b) + if "character" in b.keys(): + sentence.append(b["value"]) + # viseme = list(filter(lambda b: b["id"] in self.visemes, data)) + + # facial_expression = list(filter(lambda b: b["id"] in self.face_expression_names, data)) + rospy.loginfo("These facial expressions include %s" % facial_expression) + + ordered_facial_data = list( + sorted(facial_expression, key=lambda face: face["start"]) + ) + + validated_face_expr = [] + for fexp in ordered_facial_data: + validated_face_expr.append(self.face_expression[fexp["id"]]) + + for i in range(0, len(viseme_set) - 1): + viseme_set[i]["duration"] = ( + viseme_set[i + 1]["start"] - viseme_set[i]["start"] + ) + + viseme_set[-1]["duration"] = self.min_duration_viseme + + viseme_behaviors = list( + filter(lambda b: b["duration"] >= self.min_duration_viseme, viseme_set) + ) + ordered_visemes = list(sorted(viseme_behaviors, key=lambda b: b["start"])) + rospy.loginfo("The validated facial expressions are %s" % validated_face_expr) + rospy.loginfo("The validated visemes are %s" % viseme_set) + print("Finished getting face data for sentence:", sentence) + return (validated_face_expr, 
viseme_set) + + + +def main(): + """Set names, collect params, and give service to server""" + service_name = ActuatorNameSpace.face.name + instance_id = rospy.get_param("/instance_id") + service_id_mouth = f"{service_name}_mouth_{instance_id}" + service_id_eyes = f"{service_name}_eyes_{instance_id}" + try: + rospy.init_node(service_name) + param_eyes = rospy.get_param(service_name + "/" + instance_id + "_param/eyes/") + param_mouth = rospy.get_param(service_name + "/" + instance_id + "_param/mouth/") + s_eyes = EyesService(service_name + "_eyes_" + instance_id, param_eyes) + s_mouth = MouthService(service_name + "_mouth_" + instance_id, param_mouth) + service_server_eyes = HarmoniServiceServer(service_id_eyes, s_eyes) + service_server_mouth = HarmoniServiceServer(service_id_mouth, s_mouth) + service_server_eyes.start_sending_feedback() + service_server_mouth.start_sending_feedback() + rospy.spin() + except rospy.ROSInterruptException: + pass + + +if __name__ == "__main__": + main() diff --git a/harmoni_actuators/harmoni_face/test/face.test b/harmoni_actuators/harmoni_face/test/face.test index db4905b1..700bb1e2 100644 --- a/harmoni_actuators/harmoni_face/test/face.test +++ b/harmoni_actuators/harmoni_face/test/face.test @@ -8,6 +8,6 @@ - + - + \ No newline at end of file diff --git a/harmoni_actuators/harmoni_face/test/face_misty.test b/harmoni_actuators/harmoni_face/test/face_misty.test new file mode 100644 index 00000000..c406d049 --- /dev/null +++ b/harmoni_actuators/harmoni_face/test/face_misty.test @@ -0,0 +1,15 @@ + + + + + + + + + + + + + + + diff --git a/harmoni_actuators/harmoni_face/web/index.html b/harmoni_actuators/harmoni_face/web/index.html index 6b5b334b..73aee203 100755 --- a/harmoni_actuators/harmoni_face/web/index.html +++ b/harmoni_actuators/harmoni_face/web/index.html @@ -12,7 +12,7 @@ - - + + diff --git a/harmoni_actuators/harmoni_face/web/src/js/face.js b/harmoni_actuators/harmoni_face/web/src/js/face.js index 485772fb..36f34c55 100755 --- 
a/harmoni_actuators/harmoni_face/web/src/js/face.js +++ b/harmoni_actuators/harmoni_face/web/src/js/face.js @@ -742,7 +742,7 @@ function addEyes(white_color, iris_color, size, height, separation, iris_size, p } var x_adj = two.width / 2// (separation)//*(size/camera_depth); - var y_adj = two.height / 2//height //* (size/camera_depth); + var y_adj = two.height / 2 //height //* (size/camera_depth); reyewhite = two.makeCircle(x_adj + separation / 2, y_adj - height, size); reyewhite.fill = white_color diff --git a/harmoni_actuators/harmoni_face/web/src/js/face_misty.js b/harmoni_actuators/harmoni_face/web/src/js/face_misty.js new file mode 100755 index 00000000..d38c9f36 --- /dev/null +++ b/harmoni_actuators/harmoni_face/web/src/js/face_misty.js @@ -0,0 +1,1136 @@ +var listener, ros, two; +var container, stats; + +var camera, scene, renderer; +var tweens; + +var text, plane; + +var parts; +var aus_l; +var aus_r; +var n_aus = 43; + +var upperLipControlPoints = []; lowerLipControlPoints = []; +var lBrowControlPoints = []; rBrowControlPoints = []; + +var lockIdleUntil = 0; //variable to lock idling, if a behavioral blink is occuring + +var targetRotation = 0; +var targetRotationOnMouseDown = 0; + +var mouseX = 0; +var mouseXOnMouseDown = 0; + +var windowHalfX = window.innerWidth / 2; +var windowHalfY = window.innerHeight / 2; + +var camera_depth = 550; + +// Idle behavior +var last_blink; +var last_glance; +var looking; +var poked, poke_end, poke_time; +var blinking, blink_end, blink_time; + +var background_color; +var cm_per_px; +var ros_master_uri; +var viseme_adjustment; +var viseme_buffer, viseme_time_buffer, viseme_dur, viseme_start; +var startup_time, prev_frame_time; + + +/* +This is the first function that is called by the HTML Script. +It is responsible for the creation of all the face parts, and defines color, sizes, orientations, etc. 
+**See kiwi.html or chili.html for types of the input arguments.** +*/ +function startFace( + bkgd_color, + ros_uri, + cm_per_pixel, + viseme_adj, + eye_white_color, + eye_iris_color, + eye_size, + eye_height, + eye_separation, + eye_iris_size, + eye_pupil_scale, + eye_pupil_shape, + eyelid_width, + eyelid_height, + eyelid_arch, + nose_color, + nose_x, + nose_y, + nose_width, + nose_height, + mouth_color, + mouth_x, + mouth_y, + mouth_width, + mouth_height, + mouth_thickness, + mouth_opening, + mouth_dimple_size, + upper_lip_height_scale, + lower_lip_height_scale, + brow_color, + brow_width, + brow_height, + brow_thickness, + brow_innersize) { + + d = new Date(); + startup_time = d.getTime() + prev_frame_time = startup_time + viseme_buffer = [] + viseme_time_buffer = [] + poked = false + poke_start_time = 0 + + parts = [] + + background_color = bkgd_color; + cm_per_px = cm_per_pixel; + ros_master_uri = ros_uri; + + + container = document.createElement('div'); + container.setAttribute("id", "face-div"); + document.body.appendChild(container); + viseme_adjustment = viseme_adj + + var elem = document.getElementById('face-div'); + var params = { fullscreen: true }; + two = new Two(params).appendTo(elem); + + + var rect = two.makeRectangle(windowHalfX, windowHalfY, window.innerWidth, window.innerHeight); + rect.fill = background_color; + rect.stroke = 'None' + + + // addEyes(white_color, iris_color, size, height, separation, iris_size, pupil_scale, pupil_shape) + addEyes(eye_white_color, + eye_iris_color, + eye_size, + eye_height, + eye_separation, + eye_iris_size, + eye_pupil_scale, + eye_pupil_shape); + + // addLids(color, width, height, arch) + addLids(background_color, + eyelid_width, + eyelid_height, + eyelid_arch) + + // addNose(color, x, y, width, height) + addNose(nose_color, + nose_x, + nose_y, + nose_width, + nose_height) + + // addMouth(color, x, y, width, height, thickness, opening, dimple_size, ulip_h_scale, llip_h_scale) + addMouth(mouth_color, + mouth_x, 
+ mouth_y, + mouth_width, + mouth_height, + mouth_thickness, + mouth_opening, + mouth_dimple_size, + upper_lip_height_scale, + lower_lip_height_scale) + + // addBrows(color, width, height, thickness, arch) + addBrows(brow_color, + brow_width, + brow_height, + brow_thickness, + brow_innersize) + + last_blink = 0; + last_gaze = 0; + looking = false; + + aus_l = [] + aus_r = [] + for (i = 0; i <= n_aus + 1; i++) { + aus_l.push(0) + aus_r.push(0) + } + + document.addEventListener('mousedown', onDocumentMouseDown, false); + document.addEventListener('touchstart', onDocumentTouchStart, false); + document.addEventListener('touchmove', onDocumentTouchMove, false); + window.addEventListener('resize', onWindowResize, false); + + // Connecting to ROS + // ----------------- + + if (ros_master_uri == "") { + ros_master_uri = "ws://" + location.hostname + ":9000" + } + console.log("ROS master URI: " + ros_master_uri) + + ros = new ROSLIB.Ros({ + url: ros_master_uri + }); + + ros.on('connection', function () { + console.log('Connected to websocket server.'); + }); + + ros.on('error', function (error) { + console.log('Error connecting to websocket server: ', error); + reload_page_to_retry_connecting(2); + }); + + ros.on('close', function () { + console.log('Connection to websocket server closed.'); + reload_page_to_retry_connecting(2); + }); + + // Setup ROS network + // ---------------------- + + listener = new ROSLIB.Topic({ + ros: ros, + name: '/harmoni/actuating/face/default/expressing', + messageType: 'harmoni_face/FaceRequest' + }); + listener.subscribe(get_goal); + + is_connected_client = new ROSLIB.Service({ + ros: ros, + name: '/harmoni/actuating/face/is_connected', + serviceType: 'std_srvs/Trigger' + }); + + is_connected_client.advertise(function (_, response) { + console.log('is_connected_client received service request'); + response['success'] = true; + response['message'] = 'Face is connected'; + return true; + }); + + zeroFace(5) + //finally, start the animation 
loop. + two.bind('update', animate).play(); +} + +function reload_page_to_retry_connecting(wait_seconds) { + sleep(wait_seconds).then(function () { + document.location.reload(true); + }); +} + +function sleep(seconds) { + return new Promise(function (resolve) { + setTimeout(resolve, seconds * 1000); + }); +} + +/** +Receives and processes the FaceRequest Message sent from the face_keyframe_server +message - a FaceRequest Message, which contains visemes and/or aus to send to the face. + the message also contains timing information and magnitude information. +**/ +function get_goal(message) { + if (message.visemes.length != 0) { + console.log('Message received: visemes: ' + message.visemes); + console.log(message) + stop_visemes() + // play_visemes(message.visemes, message.viseme_ms, message.times, message.start) + play_visemes(message.visemes, 35, message.times, message.start) + } + + if (message.aus.length != 0) { + console.log('Message received: aus: ' + message.aus + " degrees: " + message.au_degrees + " side: " + message.side) + for (a in message.aus) { + this_au = message.aus[a] + console.log(this_au) + if (message.au_ms[a] < 0) { + console.log("Time cannot be less than zero!") + + } else if (this_au == 'ZeroFace') { + zeroFace(message.au_ms) + console.log('clearing AUS') + + } else { + //if the au is unilateral, it will have an 'r' or 'l' at the end to indicate the side to move + if (this_au.slice(-1) == 'l' || this_au.slice(-1) == 'r') { + au(parseInt(this_au.slice(0, -1)), message.au_degrees[a], this_au.slice(-1)) + } + //otherwise assume bilateral movement + else { + au(parseInt(this_au), message.au_degrees[a]) + } if (parseInt(this_au) == 43) { + lockIdleUntil = new Date().getTime() + message.au_ms + 500 + console.log(new Date().getTime()) + console.log(lockIdleUntil) + } + } + + } + move_face(message.au_ms) + } + if (message.hold_gaze == 1) { + looking = true + } + if (message.hold_gaze == 2) { + looking = false + } + if (message.retarget_gaze) { + 
console.log("Message received: gaze: " + message.gaze_target.x + "," + message.gaze_target.y + "," + message.gaze_target.z) + x = message.gaze_target.x + y = message.gaze_target.y + z = message.gaze_target.z + gaze_vel = message.gaze_vel + if (gaze_vel > 0) { + lookat_real_world(x, y, z, gaze_vel) + } + else { + lookat_real_world(x, y, z, 1.7) //1.7 rad/s is an average human eye saccade speed + } + } +} + +/* +Checks to see if any visemes have been added to the buffer. +If they have, it then plays the viseme for the indicated time +*/ +function check_and_play_visemes() { + if (viseme_buffer.length == 0) { return } + d = new Date(); + elapsed = d.getTime() - viseme_start + viseme_dur; + elapsed = elapsed / 1000.0 + if (viseme_time_buffer[0] > elapsed) { return }; + while (viseme_time_buffer[0] < elapsed) { + play_viseme = viseme_buffer.shift() + viseme_time = viseme_time_buffer.shift() + } + // console.log(elapsed, viseme_time) + console.log(play_viseme, viseme_dur) + viseme(play_viseme, viseme_dur); +} + +/* +adds a viseme information to the buffers. +visemes - a string representing the type of viseme to play +time_per - the length of the viseme to play +times - the time at which the viseme should be played +*/ +function play_visemes(visemes, time_per, times) { + viseme_buffer = visemes + console.log(times) + for (time in times) { + times[time] -= .035; + } + console.log(times) + viseme_time_buffer = times + viseme_dur = time_per + d = new Date(); + viseme_start = d.getTime(); +} + +/* +Stops the face from speaking +*/ +function stop_visemes() { + please_stop = true +} + +/* +Sets AU on a certain side to the given degree. 
+id - an integer representing the AU to change +degree - a float from 0 to 1 to control the intensity of the AU +side - a string indicating the side of the AU to modify (does both if left blank) +*/ +function au(id, degree, side) { + + if (side == "r") { + aus_r[id] = degree + } else if (side == "l") { + aus_l[id] = degree + } else { + aus_l[id] = degree + aus_r[id] = degree + } +} + +/* +Zeros all the AU's and moves the face to its neutral position. +t - time to move in milliseconds +*/ +function zeroFace(t) { + for (a in aus_l) { + au(a, 0) + } + move_face(t) +} + +/* +Sets all AUs to One and moves to it. +t - time to move in milliseconds +*/ +function oneFace(t) { + for (a in aus_l) { + au(a, 1.0) + } + move_face(t) +} + +/* +zeros an array of aus, and moves to the face. +to_zero - an array of integers representing the number of the aus to zero +t - time to move to the face (in milliseconds) +*/ +function zero_aus(to_zero, t) { + for (a in to_zero) { + au(to_zero[a], 0) + } + move_face(t) +} + +/* +Same as above, but it does not move the face. +*/ +function zero_aus_no_move(to_zero) { + for (a in to_zero) { + au(to_zero[a], 0) + } +} + +/* +Calculates the positions of all the facial features based on the current value of all the aus +t - float representing the milliseconds it should take to move to the current AU set +notViseme - flag to determine if the movement is a viseme. 
If it is, we only will modify the mouth shape +*/ +function move_face(t, notViseme) { + if (notViseme == undefined) { + notViseme = true + } + + if (notViseme) { + // ***** BROWS ***** (AU 1, 2, 4) + lbrow = getPart("lbrow") + rbrow = getPart("rbrow") + + var max_x = lbrow.idleControlPoint.x1 / 2 + var max_y = lbrow.idle_pos.y / 3 + + lInner = { x: lbrow.idleControlPoint.x0, y: lbrow.idleControlPoint.y0 } + rInner = { x: rbrow.idleControlPoint.x0, y: rbrow.idleControlPoint.y0 } + + lInner.y -= max_y * (1.5 * aus_l[1] + .1 * aus_l[2] - 1.2 * aus_l[4]) + lInner.x -= max_x * (0.15 * aus_l[1] + 0.5 * aus_l[4]) + rInner.y -= max_y * (1.5 * aus_r[1] + .1 * aus_r[2] - 1.2 * aus_r[4]) + rInner.x += max_x * (0.15 * aus_r[1] + 0.5 * aus_r[4]) + + lMid = { x: lbrow.idleControlPoint.x1, y: lbrow.idleControlPoint.y1 } + rMid = { x: rbrow.idleControlPoint.x1, y: rbrow.idleControlPoint.y1 } + + lMid.y -= max_y * (0.1 * aus_l[1] + aus_l[2] - 0.6 * aus_l[4]) + lMid.x -= max_x * (aus_l[1] + -0.5 * aus_l[2] + aus_l[4]) + rMid.y -= max_y * (0.1 * aus_r[1] + aus_r[2] - 0.6 * aus_r[4]) + rMid.x += max_x * (aus_r[1] + -0.5 * aus_r[2] + aus_r[4]) + + lOuter = { x: lbrow.idleControlPoint.x2, y: lbrow.idleControlPoint.y2 } + rOuter = { x: rbrow.idleControlPoint.x2, y: rbrow.idleControlPoint.y2 } + + lOuter.y -= max_y * (0.5 * aus_l[2] + 0.1 * aus_l[4]) + lOuter.x -= max_x * (-.2 * aus_l[2] + 0.2 * aus_l[4]) + rOuter.y -= max_y * (0.5 * aus_r[2] + 0.1 * aus_r[4]) + rOuter.x += max_x * (-.2 * aus_r[2] + 0.2 * aus_l[4]) + + + rBrow = { x0: rOuter.x, y0: rOuter.y, x1: rMid.x, y1: rMid.y, x2: rInner.x, y2: rInner.y } + lBrow = { x0: lOuter.x, y0: lOuter.y, x1: lMid.x, y1: lMid.y, x2: lInner.x, y2: lInner.y } + + rbrow.interpolateBrows(rBrow, t); + lbrow.interpolateBrows(lBrow, t); + + // ***** EYELIDS ****** + urlid_p = urlid.idle_pos.y; + lrlid_p = lrlid.idle_pos.y; + ullid_p = ullid.idle_pos.y; + lllid_p = lllid.idle_pos.y; + + var eyeSize = getPart('reye').size + + r_eye_width = 
eyeSize//(urlid_p-lrlid_p)+urlid.threedee.scale.y*lid_width+lrlid.threedee.scale.y*lid_width; + l_eye_width = eyeSize//(ullid_p-lllid_p)+ullid.threedee.scale.y*lid_width+lllid.threedee.scale.y*lid_width; + + openr = -.5 * aus_r[5] + openl = -.5 * aus_l[5] + + closer = .6 * aus_r[7] + closel = .6 * aus_l[7] + + closurer = (openr + closer) + aus_r[43] * (1 - (openr + closer)) + closurel = (openl + closel) + aus_l[43] * (1 - (openl + closel)) + + urlid.pos({ y: urlid_p + r_eye_width / 2 * (closurer) }, t); + lrlid.pos({ y: lrlid_p - r_eye_width / 2 * (closurer) }, t); + ullid.pos({ y: ullid_p + l_eye_width / 2 * (closurel) }, t); + lllid.pos({ y: lllid_p - l_eye_width / 2 * (closurel) }, t); + + // ***** NOSE ***** + + // nose wrinkle (raise) (au 9) + wrinkle_dist = 30 * aus_l[9]; + nose = getPart("nose"); + nose.pos({ y: nose.idle_pos.y + wrinkle_dist }, t); + + // nose width (aus 38,39) + width_scale = 1 + .35 * aus_l[38] - .35 * aus_l[39]; + nose = getPart("nose"); + // nose.scale({x:nose.idle_scale.x*width_scale}, t); + + } + // ***** MOUTH ***** + ulip = getPart("ulip"); + llip = getPart("llip"); + + max_up_dist = (nose.idle_pos.y - ulip.idle_pos.y) / 1.5 + max_down_dist = (nose.idle_pos.y - llip.idle_pos.y) / 1.5 + max_x_variation = (ulip.idleControlPoint.x0 - ulip.idleControlPoint.x3) / 4; //should be width divided by 2 I think + + lcorner = { x: ulip.idleControlPoint.x3, y: ulip.idleControlPoint.y3 } + rcorner = { x: ulip.idleControlPoint.x0, y: ulip.idleControlPoint.y0 } + + lcorner.x += max_x_variation * (.2 * aus_l[12] + .05 * aus_l[13] + .25 * aus_l[14] - .1 * aus_l[26] - .3 * aus_l[27] + .35 * aus_l[17] - .7 * aus_l[18] + .25 * aus_l[20] - .2 * aus_l[23] - .1 * aus_l[24]) / 1.1 + lcorner.y += max_down_dist * (-.2 * aus_l[25] - .2 * aus_l[26] + .7 * aus_l[13] - 1.5 * aus_l[15] - .5 * aus_l[27] - .2 * aus_l[20] - .3 * aus_l[23] - .5 * aus_l[24]) / 3.4 + rcorner.x -= max_x_variation * (.2 * aus_r[12] + .05 * aus_r[13] + .25 * aus_r[14] - .1 * aus_r[26] - .3 
* aus_r[27] + .35 * aus_r[17] - .7 * aus_r[18] + .25 * aus_r[20] - .2 * aus_r[23] - .1 * aus_r[24]) / 1.1 + rcorner.y += max_down_dist * (-.2 * aus_r[25] - .2 * aus_r[26] + .7 * aus_r[13] - 1.5 * aus_r[15] - .5 * aus_r[27] - .2 * aus_r[20] - .3 * aus_r[23] - .5 * aus_r[24]) / 3.4 + + upperl = { x: ulip.idleControlPoint.x2, y: ulip.idleControlPoint.y2 } + upperr = { x: ulip.idleControlPoint.x1, y: ulip.idleControlPoint.y1 } + + upperl.x += max_x_variation * (.55 * aus_l[10] + .25 * aus_l[14] - .4 * aus_l[18] + .25 * aus_l[20] - .1 * aus_l[23]) / 1.05 + upperl.y += max_up_dist * (.1 * aus_l[25] + .3 * aus_l[26] + .6 * aus_l[27] + .55 * aus_l[10] + .35 * aus_l[17]) / 2.2 + upperr.x -= max_x_variation * (.55 * aus_r[10] + .25 * aus_r[14] - .4 * aus_r[18] + .25 * aus_r[20] - .1 * aus_r[23]) / 1.05 + upperr.y += max_up_dist * (.1 * aus_r[25] + .3 * aus_r[26] + .6 * aus_r[27] + .55 * aus_r[10] + .35 * aus_r[17]) / 2.2 + + lowerl = { x: llip.idleControlPoint.x2, y: llip.idleControlPoint.y2 } + lowerr = { x: llip.idleControlPoint.x1, y: llip.idleControlPoint.y1 } + + lowerl.x += max_x_variation * (.25 * aus_l[14] + .5 * aus_l[16] + .2 * aus_l[26] - .4 * aus_l[18] + .25 * aus_l[20] - .2 * aus_l[23]) / 1.05 + lowerl.y += max_down_dist * (-.4 * aus_l[25] - .7 * aus_l[26] - 1.6 * aus_l[27] + .55 * aus_l[10] - .2 * aus_l[16] + .45 * aus_l[17]) / 2.2 + lowerr.x -= max_x_variation * (.25 * aus_r[14] + .5 * aus_r[16] + .2 * aus_r[26] - .4 * aus_r[18] + .25 * aus_r[20] - .2 * aus_r[23]) / 1.05 + lowerr.y += max_down_dist * (-.4 * aus_r[25] - .7 * aus_r[26] - 1.6 * aus_r[27] + .55 * aus_r[10] - .2 * aus_r[16] + .45 * aus_r[17]) / 2.2 + + upperLip = { x0: rcorner.x, y0: rcorner.y, x1: upperr.x, y1: upperr.y, x2: upperl.x, y2: upperl.y, x3: lcorner.x, y3: lcorner.y } + lowerLip = { x0: rcorner.x, y0: rcorner.y, x1: lowerr.x, y1: lowerr.y, x2: lowerl.x, y2: lowerl.y, x3: lcorner.x, y3: lcorner.y } + + ulip.interpolateLips(upperLip, t); + llip.interpolateLips(lowerLip, t); +} + +/* +Sets 
the AUs for the respective viseme. +viseme_name - a string that explains the viseme +t - a float to represent the time to move the face to the new position +*/ +function viseme(viseme_name, t) { + console.log("Viseme: " + viseme_name) + switch (viseme_name) { + + // --------------- CONSONANTS ---------------------// + + // M,B,P -> My, Buy, Pie + case 'BILABIAL': + zero_aus_no_move([10, 13, 14, 16, 18, 20, 23, 24, 25, 26, 27]) + au(23, .75) + au(14, .25) + au(24, .7) + + move_face(t, false) + break; + + // F,V -> oFFer, Vest + case "LABIODENTAL": + zero_aus_no_move([10, 13, 14, 16, 18, 20, 23, 24, 25, 26, 27]) + au(10, 0.5); + au(20, 0.4); + au(25, .8); + + move_face(t, false) + break; + + // TH, TH - THin, THis + case "INTERDENTAL": + zero_aus_no_move([10, 13, 14, 16, 18, 20, 23, 24, 25, 26, 27]) + au(10, .6) + au(18, .75) + au(25, .5) + + move_face(t, false) + break; + + // L,T,D,Z,S,N -> Light, Top, DaD, Zebra, Sad, Nope + case "DENTAL_ALVEOLAR": + zero_aus_no_move([10, 13, 14, 16, 18, 20, 23, 24, 25, 26, 27]) + au(25, .65) + + move_face(t, false) + break; + + // R,SH,ZH,CH -> Red, SHould, aSia, CHart + case "POSTALVEOLAR": + zero_aus_no_move([10, 13, 14, 16, 18, 20, 23, 24, 25, 26, 27]) + au(10, .75) + au(18, 1) + au(25, 1) + + move_face(t, false) + break; + + // K,G,NG -> Cat, Game, thiNG + case "VELAR_GLOTTAL": + zero_aus_no_move([10, 13, 14, 16, 18, 20, 23, 24, 25, 26, 27]) + au(10, .6) + // au(18,.5) + au(26, .5) + + move_face(t, false) + break; + + // ------------------ VOWELS ------------------------// + // EE, I -> flEEce, bIt + case "CLOSE_FRONT_VOWEL": + zero_aus_no_move([10, 13, 14, 16, 18, 20, 23, 24, 25, 26, 27]) + au(26, 1) + au(20, 1) + au(10, .4) + move_face(t, false) + break; + + // OO -> bOOt + case "CLOSE_BACK_VOWEL": + zero_aus_no_move([10, 13, 14, 16, 18, 20, 23, 24, 25, 26, 27]) + au(10, .5) + au(13, .8) + au(16, .6) + au(18, 1) + au(23, 1) + au(24, 1) + au(25, 1) + au(26, .4) + + move_face(t, false) + + break; + + // schwa -> ArenA + case 
"MID_CENTRAL_VOWEL": + zero_aus_no_move([10, 13, 14, 16, 18, 20, 23, 24, 25, 26, 27]) + au(26, 1) + au(25, .5) + au(23, 1) + + move_face(t, false) + break; + + // AE,AU,A,AY,EH -> trAp, mOUth, fAther, fAce, drEss + case "OPEN_FRONT_VOWEL": + zero_aus_no_move([10, 13, 14, 16, 18, 20, 23, 24, 25, 26, 27]) + au(14, 1) + au(20, 1) + au(25, .7) + au(26, .75) + + move_face(t, false) + break; + + // AW,OI,O -> thOUght, chOIce, gOAt + case "OPEN_BACK_VOWEL": + zero_aus_no_move([10, 13, 14, 16, 18, 20, 23, 24, 25, 26, 27]) + au(26, .5) + au(27, 1) + + move_face(t, false) + break; + + case "IDLE": + zeroFace(t) + break; + } +} + +/* +Create a new face part object that tracks the metadata for the THREE.Object3D, which is the part that is actually drawn. +Lip objects are created with a different constructor that enables control points rather than scale, rotate, translate +name - a string that represents the name of the face part +x - float representing x offset +y - float representing y offset +z - float representing z offset +*/ +function facePart(name, group, x, y) { + this.name = name; + this.idle_pos = { x: x, y: y }; + this.idle_rot = 0; + this.idle_scale = 1; + //below MUST be set after assembling the part + this.idle_size = { x: 0, y: 0, z: 0 }; + + this.group = group + this.group.translation.set(x, y) + + this.group.id = name + parts.push(this) + // scene.add(this.threedee) + this.rot = function (goal, t) { + TWEEN.remove(this.group.rotation); + + var goal = goal; + var target = this.group.rotation; + var tween = new TWEEN.Tween(target, { override: true }).to(goal, t); + tween.easing(TWEEN.Easing.Quadratic.InOut); + tween.start(); + } + + this.pos = function (goal, t) { + TWEEN.remove(this.group.translation); + + var goal = goal; + var target = this.group.translation; + var tween = new TWEEN.Tween(target, { override: true }).to(goal, t); + tween.easing(TWEEN.Easing.Quadratic.InOut); + tween.start(); + } + + this.scale = function (goal, t) { + 
TWEEN.remove(this.group.scale); + + var goal = goal; + var target = this.group.scale; + var tween = new TWEEN.Tween(target, { override: true }).to(goal, t); + tween.easing(TWEEN.Easing.Quadratic.InOut); + tween.start(); + } + +} + + +/* +initializes the eye objects. Handles the placement within the facePart reference frame. +Eyes are spheres with circular irises, circular pupils and a circular catchlight (the white shine) +TODO: put the cat pupils in I guess +*/ +function addEyes(white_color, iris_color, size, height, separation, iris_size, pupil_scale, pupil_shape) { + + function makeEyeGroup() { + var iris = two.makeCircle(0, 0, iris_size); + var pupil = two.makeCircle(0, 0, iris_size * pupil_scale); + var eyeShine = two.makeCircle(- iris_size * pupil_scale / 2, - iris_size * pupil_scale / 1.6, iris_size * pupil_scale / 3); + + + iris.fill = iris_color + iris.stroke = 'None' + pupil.fill = 'black' + + return two.makeGroup(iris, pupil, eyeShine) + } + + var x_adj = two.width / 2// (separation)//*(size/camera_depth); + var y_adj = two.height * 3/ 4 //height //* (size/camera_depth); + + reyewhite = two.makeCircle(x_adj + separation / 2, y_adj - height, size); + reyewhite.fill = white_color + reyewhite.stroke = 'None' + + reye = new facePart("reye", makeEyeGroup(), x_adj + separation / 2, y_adj - height)//-(separation/2)-x_adj, y_adj); + reye.separation = separation + reye.size = size + + + leyewhite = two.makeCircle(x_adj - separation / 2, y_adj - height, size); + leyewhite.fill = white_color + leyewhite.stroke = 'None' + + leye = new facePart("leye", makeEyeGroup(), x_adj - separation / 2, y_adj - height); + leye.separation = separation + leye.size = size +} + +/* +Adds the lid objects. These are actually very large rectangles with a "U" cutout. +Blinking moves them in the y direction. 
+*/ +function addLids(color, width, height, arch) { + //make the shape for an upper lid (basically a big rectunguloid with a u-shaped cutout) + function makeUpperLidGroup() { + var curve = two.makeCurve(-1.1 * width, -2 * width, -1.1 * width, -1 * width, -1.1 * width, 0, -1.1 * width, width, -width, width, -width / 2, width / 2, width / 2, width / 2, width, width, 1.1 * width, width, 1.1 * width, 0, 1.1 * width, -1 * width, 1.1 * width, -2 * width) + curve.fill = background_color + curve.stroke = 'None' + return two.makeGroup(curve) + } + + var leye = getPart("leye"); + var reye = getPart("reye"); + + var xl = leye.group.translation.x + var xr = reye.group.translation.x + + var y = reye.group.translation.y - height //+ (camera_depth*leye.threedee.position.y/(camera_depth-leye.threedee.position.z)); + + ullid = new facePart("ullid", makeUpperLidGroup(), xl, y) + + urlid = new facePart("urlid", makeUpperLidGroup(), xr, y) + + //make the shapes for the lower lid (same as upper lid, but in reverse, and with two rectangles to stop sharp corners) + function makeLowerLidGroup() { + var curve = two.makeCurve(1.1 * width, 2 * width, 1.1 * width, 1 * width, 1.1 * width, 0, 1.1 * width, -width, width, -width, width / 2, -width / 2, -width / 2, -width / 2, -width, -width, -1.1 * width, -width, -1.1 * width, 0, -1.1 * width, 1 * width, -1.1 * width, 2 * width) + var rect = two.makeRectangle(-0.9 * width, -width, width / 3, 2 * width) + var rect2 = two.makeRectangle(0.9 * width, -width, width / 3, 2 * width) + rect.fill = background_color + rect2.fill = background_color + curve.fill = background_color + curve.stroke = 'None' + rect.stroke = 'None' + rect2.stroke = 'None' + return two.makeGroup(rect, rect2, curve) + } + + y = reye.group.translation.y + height + + lllid = new facePart("lllid", makeLowerLidGroup(), xl, y) + + lrlid = new facePart("lrlid", makeLowerLidGroup(), xr, y) +} + +/* +Initializes the nose object. 
Its shape is an upside down triangle of size width and height +*/ +function addNose(color, x, y, width, height) { + var xc = width / 2; + var yc = height / 2 + + function makeNoseGroup() { + var triangle = two.makePath(-xc, yc, xc, yc, 0, -yc, -xc, yc) + triangle.fill = color + triangle.stroke = 'None' + return two.makeGroup(triangle) + } + + nose = new facePart("nose", makeNoseGroup(), windowHalfX + x, windowHalfY - y); + +} + +/* +Adds both brow objects. The brows are controlled by three points and taper at the ends +TODO: add a parameter for the height of the brow arch +*/ +function addBrows(color, width, height, thickness, innerSize) { + + function makeBrowGroup(side) { + var sign = side == 'left' ? 1 : -1 + var browCurve = two.makeCurve(sign * width, 0, sign * width / 8, -0.75 * thickness, -sign * width, thickness, true) + browCurve.fill = 'None' + browCurve.linewidth = thickness + browCurve.stroke = color + browCurve.cap = 'round' + return two.makeGroup(browCurve) + } + function getIdleCoords(side) { + var sign = side == 'left' ? 
1 : -1 + return { x2: -sign * width, y2: thickness, x1: sign * width / 8, y1: -0.75 * thickness, x0: sign * width, y0: 0 } + } + + function interpolateBrows(goal, t) { + TWEEN.remove(this.currentControlPoint); + var goal = goal; + var target = this.currentControlPoint; + var tween = new TWEEN.Tween(target, { override: true }).to(goal, t); + tween.easing(TWEEN.Easing.Quadratic.InOut); + var browInfo = this.group.children[0] + tween.onUpdate(function () { + + browInfo.vertices[0].x = this.x0 + browInfo.vertices[0].y = this.y0 + + browInfo.vertices[1].x = this.x1 + browInfo.vertices[1].y = this.y1 + + browInfo.vertices[2].x = this.x2 + browInfo.vertices[2].y = this.y2 + }); + tween.start(); + } + + var leye = getPart("leye"); + var reye = getPart("reye"); + + var xl = leye.group.translation.x + var xr = reye.group.translation.x + + var y = reye.group.translation.y - height + + lbrow = new facePart("lbrow", makeBrowGroup('left'), xl, y) + lbrow.idleControlPoint = getIdleCoords('left') + lbrow.currentControlPoint = getIdleCoords('left') + lbrow.interpolateBrows = interpolateBrows + + rbrow = new facePart("rbrow", makeBrowGroup('right'), xr, y); + rbrow.idleControlPoint = getIdleCoords('right') + rbrow.currentControlPoint = getIdleCoords('right') + rbrow.interpolateBrows = interpolateBrows +} + +/* +Initializes the mouth object, which consists of two lip objects. +The lip objects each consist of 4 control points: two mouth corners and two intermediate control points. 
+The lips share the same lip corners +*/ +function addMouth(color, x, y, width, height, thickness, opening, dimple_size, ulip_h_scale, llip_h_scale) { + + function makeLipGroup() { + var lipCurve = two.makeCurve(width / 2, 0, width / 8, height, -width / 8, height, -width / 2, 0, true) + lipCurve.fill = 'None' + lipCurve.linewidth = thickness + lipCurve.stroke = color + lipCurve.cap = 'round' + return two.makeGroup(lipCurve) + } + + function interpolateLips(goal, t) { + TWEEN.remove(this.currentControlPoint); + + var goal = goal; + var target = this.currentControlPoint; + var tween = new TWEEN.Tween(target, { override: true }).to(goal, t); + tween.easing(TWEEN.Easing.Quadratic.InOut); + var mouthInfo = this.group.children[0] + tween.onUpdate(function () { + mouthInfo.vertices[0].x = this.x0 + mouthInfo.vertices[0].y = this.y0 + + mouthInfo.vertices[1].x = this.x1 + mouthInfo.vertices[1].y = this.y1 + + mouthInfo.vertices[2].x = this.x2 + mouthInfo.vertices[2].y = this.y2 + + mouthInfo.vertices[3].x = this.x3 + mouthInfo.vertices[3].y = this.y3 + }); + tween.start(); + } + + llip = new facePart('llip', makeLipGroup(), windowHalfX + x, windowHalfY - y) + llip.idleControlPoint = { x0: width / 2, y0: 0, x1: width / 5, y1: height, x2: -width / 5, y2: height, x3: -width / 2, y3: 0 } + llip.currentControlPoint = { x0: width / 2, y0: 0, x1: width / 5, y1: height, x2: -width / 5, y2: height, x3: -width / 2, y3: 0 } + llip.interpolateLips = interpolateLips + + ulip = new facePart('ulip', makeLipGroup(), windowHalfX + x, windowHalfY - y) + ulip.idleControlPoint = { x0: width / 2, y0: 0, x1: width / 5, y1: height, x2: -width / 5, y2: height, x3: -width / 2, y3: 0 } + ulip.currentControlPoint = { x0: width / 2, y0: 0, x1: width / 5, y1: height, x2: -width / 5, y2: height, x3: -width / 2, y3: 0 } + ulip.interpolateLips = interpolateLips + +} + +/* +returns the part from the list of parts in the scene based on name. 
+*/ +function getPart(name) { + for (i in parts) { + if (parts[i].name == name) { + return parts[i]; + } + } + return null; +} + +/* +looks at a given point in the x,y,z space +x,y,z - floats representing the coordinates of the point to look at +t - float representing time to move to the location +*/ +function lookat(x, y, z, t) { + + var leye = getPart("leye") + + var ygoal = leye.idle_pos.y - (y * leye.size) / z + var xoffset = (leye.size * (leye.separation / 2 + x)) / (Math.sqrt(Math.pow(z, 2) + Math.pow(leye.separation / 2 + x, 2))) + var xgoal = leye.idle_pos.x + xoffset + + leye.pos({ x: xgoal, y: ygoal }, t); + + var reye = getPart("reye") + + xoffset = (reye.size * (reye.separation / 2 - x)) / (Math.sqrt(Math.pow(z, 2) + Math.pow(reye.separation / 2 - x, 2))) + xgoal = reye.idle_pos.x - xoffset + //ygoal is same for both eyes + + reye.pos({ x: xgoal, y: ygoal }, t); +} + +// for looking at things IRL; use avg velocity rather than time +// velocity is in radians/sec +function lookat_real_world(realx, realy, realz, vel) { + x = realx / cm_per_px; + y = realy / cm_per_px; + z = realz / cm_per_px; + + var leye = getPart("leye") + + var ygoal = leye.idle_pos.y - (y * leye.size) / z + var lxoffset = (leye.size * (leye.separation / 2 + x)) / (Math.sqrt(Math.pow(z, 2) + Math.pow(leye.separation / 2 + x, 2))) + var lxgoal = leye.idle_pos.x + lxoffset + + var reye = getPart("reye") + + var rxoffset = (reye.size * (reye.separation / 2 - x)) / (Math.sqrt(Math.pow(z, 2) + Math.pow(reye.separation / 2 - x, 2))) + var rxgoal = reye.idle_pos.x - rxoffset + //ygoal is same for both eyes + var t = Math.max(Math.abs(rxoffset / vel), Math.abs(lxoffset / vel), Math.abs((y * leye.size) / z)) + + leye.pos({ x: lxgoal, y: ygoal }, t); + reye.pos({ x: rxgoal, y: ygoal }, t); +} + + +/* +function to perform idle animation, which consists of looking around the room at random +*/ +function doIdle() { + + var d = new Date(); + var now = d.getTime(); + if (poked) { + if (now - poke_end 
>= 0) { + zeroFace(poke_time / 2) + poked = false + } + } + + if (blinking) { + if (now - blink_end >= 0) { + au(43, 0) + move_face(blink_time / 2) + blinking = false + } + } + if ((Math.floor((Math.random() * 500)) == 0 && now - last_blink > 2000) || now - last_blink > 8000) { + //console.log('blinky') + blink(300); + last_blink = now; + } + + if (!looking) { + var xrange = windowHalfX; + var yrange = windowHalfY; + var zrange = 1000; + + if ((Math.floor((Math.random() * 500)) == 0 && now - last_gaze > 2000) || now - last_gaze > 5000) { + var xgoal = -xrange / 2 + Math.floor((Math.random() * xrange)); + var ygoal = -yrange / 2 + Math.floor((Math.random() * yrange)); + var zgoal = 2000; + lookat(xgoal, ygoal, zgoal, 700); + last_gaze = now + } + } +} + + +function blink(t) { + au(43, 1.0); + move_face(t / 2) + blink_time = t + + d = new Date() + blink_end = d.getTime() + t / 2; + blinking = true +} + +function animate() { + if (typeof gui === "undefined" && new Date().getTime() > lockIdleUntil) { + doIdle(); + } + + check_and_play_visemes() + TWEEN.update(); +} + + +/* +Puckers the lips, blinks and raises the nose +*/ +function doPoke() { + poke_time = 600 + au(9, 1); + au(43, 1); + au(18, 1); + au(2, 1) + move_face(300) + + d = new Date() + poke_end = d.getTime() + 500; + poked = true; +} + + +function onWindowResize() { + + //TODO: make this look good + + windowHalfX = window.innerWidth / 2; + windowHalfY = window.innerHeight / 2; + + two.width = window.innerWidth + two.height = window.innerHeight + +} + +function onDocumentMouseMove(event) { + /** + mouseX = event.clientX - windowHalfX; + + targetRotation = targetRotationOnMouseDown + ( mouseX - mouseXOnMouseDown ) * 0.02; + //update_goal((event.clientX)/100) + **/ +} + +function onDocumentMouseDown(event) { + + event.preventDefault(); + + mouseX = event.clientX - windowHalfX; + mouseY = windowHalfY - event.clientY; + clickOrTouch(mouseX, mouseY) + +} + +function clickOrTouch(x, y) { + console.log('click') + if 
(typeof gui === "undefined") { + //don't do the poke animation if the gui is up or else dragging the controllers makes the poke happen + doPoke(); + } + lookat_real_world(0, 0, 60, 1.7); +} + + +function onDocumentMouseUp(event) { +} + +function onDocumentMouseOut(event) { +} + +function onDocumentTouchStart(event) { + mouseX = event.touches[0].pageX - windowHalfX; + mouseY = windowHalfY - event.touches[0].pageY; + clickOrTouch(mouseX, mouseY) +} + +function onDocumentTouchMove(event) { +} diff --git a/harmoni_actuators/harmoni_face/web/src/js/run_misty.js b/harmoni_actuators/harmoni_face/web/src/js/run_misty.js new file mode 100755 index 00000000..fbae7ba1 --- /dev/null +++ b/harmoni_actuators/harmoni_face/web/src/js/run_misty.js @@ -0,0 +1,11 @@ +startFace( + '#000000', //background color + '', //ros_master_uri + .009, //cm per pixel + 0, //viseme compression factor + '#ffffff', '#85e5e5', 125, 100, 250, 70, .7, "round", //eyes (white_color, iris_color, size, height, separation, iris_size, pupil_scale, pupil_shape) + 125, 100, .4, //eyelids (width, height, arch) + '#ff99cc', 0, 10, 0, 0, //nose (color, x, y, width, height) + '#DCDCDC', 8, -80, 100, 10, 9, 1, 4, 50.1, 500.1, //mouth (color, x, y, width, height, thickness, opening, dimple_size, ulip_h_scale, llip_h_scale) + '#DCDCDC', 55, 60, 9, 5 //brows (color, width, height, thickness, innersize) +) diff --git a/harmoni_actuators/harmoni_gesture/config/configuration.yaml b/harmoni_actuators/harmoni_gesture/config/configuration.yaml index 4f49f6ec..9ab0b1c8 100644 --- a/harmoni_actuators/harmoni_gesture/config/configuration.yaml +++ b/harmoni_actuators/harmoni_gesture/config/configuration.yaml @@ -6,3 +6,8 @@ gesture: robot_joint_radians_topic: "qt_robot/joints/state_rad" robot_gesture_topic: "qt_robot/gesture/play" time_interval: 0.01 + default_param: + path: "$(find harmoni_gesture)/data" + robot_gesture_topic: "misty_robot/gesture/play" + robot_joint_radians_topic: "misty_robot/joints/state_rad" + 
robot_joint_topic: "misty_robot/joints/state" \ No newline at end of file diff --git a/harmoni_actuators/harmoni_gesture/data/Misty/bye.xml b/harmoni_actuators/harmoni_gesture/data/Misty/bye.xml new file mode 100644 index 00000000..3bb1f8a9 --- /dev/null +++ b/harmoni_actuators/harmoni_gesture/data/Misty/bye.xml @@ -0,0 +1,57 @@ + + + bye + 10 + + + move_arms + 0 + -30 + + + + move_head + 0 + 30 + 0 + + + + move_arms + 0 + 0 + + + + move_arms + 0 + -30 + + + + move_arms + 0 + 0 + + + + move_head + 0 + 0 + 0 + + + + move_arms + 0 + -30 + + + + move_arms + 0 + 0 + + + + \ No newline at end of file diff --git a/harmoni_actuators/harmoni_gesture/data/Misty/no.xml b/harmoni_actuators/harmoni_gesture/data/Misty/no.xml new file mode 100644 index 00000000..ad4cfc6c --- /dev/null +++ b/harmoni_actuators/harmoni_gesture/data/Misty/no.xml @@ -0,0 +1,49 @@ + + + no + 2.0 + + + move_head + 0 + -30 + 0 + + + + move_head + 0 + 30 + 0 + + + + move_head + 0 + -30 + 0 + + + + move_head + 0 + 30 + 0 + + + + move_head + 0 + -30 + 0 + + + + move_head + 0 + 0 + 0 + + + + \ No newline at end of file diff --git a/harmoni_actuators/harmoni_gesture/data/Misty/reset.xml b/harmoni_actuators/harmoni_gesture/data/Misty/reset.xml new file mode 100644 index 00000000..b49595b5 --- /dev/null +++ b/harmoni_actuators/harmoni_gesture/data/Misty/reset.xml @@ -0,0 +1,20 @@ + + + reset + 10 + + + move_head + 0 + 0 + 0 + + + + move_arms + 0 + 0 + + + + \ No newline at end of file diff --git a/harmoni_actuators/harmoni_gesture/data/Misty/suspance.xml b/harmoni_actuators/harmoni_gesture/data/Misty/suspance.xml new file mode 100644 index 00000000..d76019ea --- /dev/null +++ b/harmoni_actuators/harmoni_gesture/data/Misty/suspance.xml @@ -0,0 +1,14 @@ + + + no + 2.0 + + + move_head + 0 + 0 + 90 + + + + \ No newline at end of file diff --git a/harmoni_actuators/harmoni_gesture/data/Misty/yes.xml b/harmoni_actuators/harmoni_gesture/data/Misty/yes.xml new file mode 100644 index 00000000..d21c298e --- /dev/null 
+++ b/harmoni_actuators/harmoni_gesture/data/Misty/yes.xml @@ -0,0 +1,49 @@ + + + yes + 2.0 + + + move_head + 20 + 0 + 0 + + + + move_head + -20 + 0 + 0 + + + + move_head + 20 + 0 + 0 + + + + move_head + -20 + 0 + 0 + + + + move_head + 20 + 0 + 0 + + + + move_head + 0 + 0 + 0 + + + + \ No newline at end of file diff --git a/harmoni_actuators/harmoni_gesture/launch/gesture_service_misty.launch b/harmoni_actuators/harmoni_gesture/launch/gesture_service_misty.launch new file mode 100644 index 00000000..857a7eb8 --- /dev/null +++ b/harmoni_actuators/harmoni_gesture/launch/gesture_service_misty.launch @@ -0,0 +1,12 @@ + + + + + + + + + + + + diff --git a/harmoni_actuators/harmoni_gesture/src/harmoni_gesture/gesture_service_misty.py b/harmoni_actuators/harmoni_gesture/src/harmoni_gesture/gesture_service_misty.py new file mode 100755 index 00000000..641ee01d --- /dev/null +++ b/harmoni_actuators/harmoni_gesture/src/harmoni_gesture/gesture_service_misty.py @@ -0,0 +1,337 @@ +#!/usr/bin/env python3 + +# Common Imports +import rospy +import roslib + +from harmoni_common_lib.constants import State, ActuatorNameSpace +from harmoni_common_lib.service_server import HarmoniServiceServer +from harmoni_common_lib.service_manager import HarmoniServiceManager +import harmoni_common_lib.helper_functions as hf + +# Specific Imports +from std_msgs.msg import String, Bool +import numpy as np +import ast +import requests +from requests.exceptions import Timeout +import xml.etree.ElementTree as ET + +class GestureService(HarmoniServiceManager): + """ + Gesture service + """ + + def __init__(self, name, param): + """ Initialization of variables and gesture parameters """ + super().__init__(name) + self.gestures_name = [] + self.gestures_duration = [] + self.gesture_list_received = False + self.gesture_done = False + self.name = name + self.service_id = hf.get_child_id(self.name) + #TODO delete - should not be needed + self.gesture_pub = rospy.Publisher( + ActuatorNameSpace.gesture.value + 
self.service_id, String, queue_size=1 + ) + #TODO delete - should not be needed + self.gesture_sub = rospy.Subscriber( + ActuatorNameSpace.gesture.value + self.service_id + "/get_list", + String, + self._get_list_callback, + queue_size=1, + ) + #TODO delete - should not be needed + self.gesture_done_sub = rospy.Subscriber( + ActuatorNameSpace.gesture.value + self.service_id + "/done", + Bool, + self._gesture_done_callback, + queue_size=1, + ) + self.state = State.INIT + self.setup_gesture() + self.path = param["path"] + return + + def _gesture_done_callback(self, data): + """Callback function for gesture done + + Args: + data (bool): gesture done (True) + """ + if data: + self.gesture_done = True + + def _get_list_callback(self, data): + """Getting the list from the gesture reader + + Args: + data (str): JSON list of gesture items for a specific robot + """ + print("List callback") + if self.gestures_name == []: + rospy.loginfo("Gesture list received") + data = ast.literal_eval(data.data) + for item in data: + self.gestures_name.append(item["name"]) + self.gestures_duration.append(item["duration"]) + self.gesture_list_received = True + + def setup_gesture(self): + """ Setup the gesture """ + rospy.loginfo("Setting up the %s" % self.name) + #while not self.gesture_list_received: + # rospy.sleep(0.1) + rospy.loginfo("Received list of gestures") + # self._get_list_callback("{'name':'QT/point_front', 'duration':'4'}") + return + + def do(self, data): + """Do the gesture + + Args: + data (str): it could be a string of: + - object containing {"behavior_data": str} (as the result of TTS synthesis) + - object of: {"name": str, "timing": int} + + Returns: + response (int): state of the DO action + """ + self.state = State.REQUEST + self.actuation_completed = False + if type(data) == str: + data = ast.literal_eval(data) + try: + rospy.loginfo(f"length of data is {len(data)}") + if "behavior_data" in data: + data = ast.literal_eval(data["behavior_data"]) + 
gesture_data = self._get_gesture_data(data) + else: + gesture_data = data + self.gesture_pub.publish(str(data)) + print(gesture_data) + if gesture_data: + while not self.gesture_done: + self.state = State.REQUEST + rospy.sleep(0.05)  # yield instead of busy-waiting + self.state = State.SUCCESS + self.gesture_done = False + self.actuation_completed = True + except IOError: + rospy.logwarn("Gesture failed") + self.state = State.FAILED + self.actuation_completed = True + return {"response": self.state} + + def request(self, data): + """Request to the robot to do the gesture + + Args: + data (str): it could be a string of: + - object containing {"behavior_data": str} (as the result of TTS synthesis) + - object of: {"name": str, "timing": int} + + Returns: + response (int): state of the REQUEST action + """ + self.state = State.REQUEST + self.actuation_completed = False + + #if type(data) == str: + # data = ast.literal_eval(data) + + try: + rospy.loginfo(f"length of data is {len(data)}") + if "behavior_data" in data: + data = ast.literal_eval(data["behavior_data"]) + gesture_data = self._get_gesture_data(data) + else: + gesture_data = data + #self.gesture_pub.publish(str(data)) + + print(gesture_data) + if gesture_data: + self.parse_gesture_misty(gesture_data) + while not self.gesture_done: + self.state = State.REQUEST + rospy.sleep(0.05)  # yield instead of busy-waiting + self.state = State.SUCCESS + self.gesture_done = False + self.actuation_completed = True + self.response_received = True + except Timeout: + rospy.logwarn("Gesture failed: The IP of the robot appears unreachable") + self.state = State.FAILED + self.actuation_completed = True + self.response_received = False + return {"response": self.state} + + def parse_gesture_misty(self, data): + tree = ET.parse(self.path + "/" + ast.literal_eval(data)["gesture"] + ".xml") + root = tree.getroot() + current_time = 0.0 + for child in root.findall("waypoints/point"): + gesture_type = child.find("gesture_type").text + time = float(child.find("Time").text) + if (current_time < time): + #print((time-current_time)) + 
rospy.sleep((time - current_time)) + current_time = time + if (gesture_type == "move_head"): + url, payload = self.move_head(int(child.find("Pitch").text), int(child.find("Roll").text), int(child.find("Yaw").text)) + elif gesture_type == 'move_arms': + url, payload = self.move_arms(int(child.find("LeftArmPosition").text), int(child.find("RightArmPosition").text)) + else: + #throw_exception + url, payload = None, None + pass + print(url) + print(payload) + try: + response = requests.post(url, + params = payload, + timeout = 1) + except Timeout: + response = requests.post(url, + params = payload, + timeout = 4) + rospy.logwarn("Gesture failed: The ip of the robot appears unreachable") + print(response) + self.gesture_done = True + + def move_head(self, Pitch=0, Roll=0, Yaw=0): + payload = {'Pitch': Pitch, + 'Roll': Roll, + 'Yaw': Yaw + } + print("Sending request") + url = 'http://{}/api/head'.format(rospy.get_param("/robot_ip")) + return url, payload + + def move_arms(self, LeftArmPosition = None, RightArmPosition = None): + payload = {'LeftArmPosition': LeftArmPosition, + 'RightArmPosition': RightArmPosition + } + print("Sending request") + url = 'http://{}/api/arms/set'.format(rospy.get_param("/robot_ip")) + return url, payload + + def _get_gesture_data(self, data): + """Getting the gesture data parsing the output of TTS + + Args: + data (str): string of json {"behavior_data":str} + + Returns: + [bool]: getting the gesture done (True) + """ + if type(data) == str: + data = ast.literal_eval(data) + behavior_data = ast.literal_eval(data["behavior_data"]) + words_data = list(filter(lambda b: b["type"] == "word", behavior_data)) + behavior_set = [] + sentence = [] + for b in behavior_data: + if "id" in b.keys(): + if b["id"] in self.gestures_name: + behavior_set.append(b) + if "character" in b.keys(): + sentence.append(b["value"]) + ordered_gesture_data = list( + sorted(behavior_set, key=lambda face: face["start"]) + ) + print(ordered_gesture_data) + validated_gesture 
= [] + for gest in ordered_gesture_data: + validated_gesture.append(gest["id"]) + if ordered_gesture_data == []: + rospy.loginfo("No gestures") + return False + timing_word_behaviors = words_data + ordered_gesture_data + ordered_timing_word_behaviors = list( + sorted(timing_word_behaviors, key=lambda behavior: behavior["start"]) + ) + start_time = rospy.Time.now() + for index, behav in enumerate(ordered_timing_word_behaviors[:-1]): + print(ordered_timing_word_behaviors[index]) + if behav["type"] != "word": + print("Here") + while rospy.Time.now() - start_time < rospy.Duration.from_sec( + behav["start"] + ): + pass + gesture_timing = float( + ordered_timing_word_behaviors[index + 1]["start"] + ) # you cannot have a behavior sets at the end of the sentence + rospy.loginfo( + "Play " + + str(behav["id"]) + + " at time:" + + str(behav["start"]) + + " with a duration of: " + + str(gesture_timing) + ) + data = {"gesture": behav["id"], "timing": gesture_timing} + self.gesture_pub.publish(str(data)) + + if ordered_timing_word_behaviors[len(ordered_timing_word_behaviors) - 1]: + if ( + ordered_timing_word_behaviors[len(ordered_timing_word_behaviors) - 1][ + "type" + ] + != "word" + ): + print("Here") + while rospy.Time.now() - start_time < rospy.Duration.from_sec( + ordered_timing_word_behaviors[ + len(ordered_timing_word_behaviors) - 1 + ]["start"] + ): + pass + gesture_timing = float( + ordered_timing_word_behaviors[ + len(ordered_timing_word_behaviors) - 1 + ]["start"] + ) # you cannot have a behavior sets at the end of the sentence + rospy.loginfo( + "Play " + + str( + ordered_timing_word_behaviors[ + len(ordered_timing_word_behaviors) - 1 + ]["id"] + ) + + " at time:" + + str( + ordered_timing_word_behaviors[ + len(ordered_timing_word_behaviors) - 1 + ]["start"] + ) + + " with a duration of: " + + str(gesture_timing) + ) + data = { + "gesture": ordered_timing_word_behaviors[ + len(ordered_timing_word_behaviors) - 1 + ]["id"], + "timing": gesture_timing, + } + 
self.gesture_pub.publish(str(data)) + return True + + +def main(): + service_name = ActuatorNameSpace.gesture.name + instance_id = rospy.get_param("/instance_id") + service_id = f"{service_name}_{instance_id}" + try: + rospy.init_node(service_name) + params = rospy.get_param(service_name + "/" + instance_id + "_param/") + s = GestureService(service_name, params) + service_server = HarmoniServiceServer(service_id, s) + service_server.start_sending_feedback() + rospy.spin() + except rospy.ROSInterruptException: + pass + + +if __name__ == "__main__": + main() diff --git a/harmoni_actuators/harmoni_gesture/src/harmoni_gesture/misty_gesture_interface.py b/harmoni_actuators/harmoni_gesture/src/harmoni_gesture/misty_gesture_interface.py new file mode 100755 index 00000000..7f56adf7 --- /dev/null +++ b/harmoni_actuators/harmoni_gesture/src/harmoni_gesture/misty_gesture_interface.py @@ -0,0 +1,165 @@ +#!/usr/bin/env python3 + +# Common Imports +import rospy +import roslib + +from harmoni_common_lib.constants import State, ActuatorNameSpace +from harmoni_common_lib.service_server import HarmoniServiceServer +from harmoni_common_lib.service_manager import HarmoniServiceManager +import harmoni_common_lib.helper_functions as hf + +# Specific Imports +# from qt_gesture_controller.srv import * +from std_msgs.msg import String, Bool +from sensor_msgs.msg import JointState +import numpy as np +import ast +import sys +import os +import math +import xml.etree.ElementTree as ET + + +class GestureInterface(HarmoniServiceManager): + """ + Gesture service + """ + + def __init__(self, name, param): + """ Gesture""" + super().__init__(name + "_misty") + self.gestures_name = [] + self.gestures_duration = [] + self.gesture_list = [] + self.read_gesture_done = False + """ Setup Params """ + self.name = name + self.path = param["path"] + self.gesture_topic = param["robot_gesture_topic"] + self.service_id = hf.get_child_id(self.name) + """ Setup the gesture """ + self.gesture_service = 
rospy.Publisher(self.gesture_topic, String, queue_size=1)
+        self.gesture_sub = rospy.Subscriber(
+            ActuatorNameSpace.gesture.value + self.service_id,
+            String,
+            self._handle_gesture_callback,
+            queue_size=1,
+        )
+        self.gesture_pub = rospy.Publisher(
+            ActuatorNameSpace.gesture.value + self.service_id + "/done",
+            Bool,
+            queue_size=1,
+        )
+        self.gesture_list_pub = rospy.Publisher(
+            ActuatorNameSpace.gesture.value + self.service_id + "/get_list",
+            String,
+            queue_size=1,
+        )
+        # self.joint_sub = rospy.Subscriber(
+        #     self.joint_sub_topic, JointState, self._handle_degree
+        # )
+        # self.joint_pub = rospy.Publisher(self.joint_pub_topic, JointState, queue_size=1)
+        """Setup the gesture service as server """
+        self.read_gestures(param["path"])
+        self.state = State.INIT
+        return
+
+    def _handle_degree(self, data):
+        degrees = data.position
+        radians = [math.radians(d) for d in degrees]
+        joint_rad = JointState()
+        joint_rad.position = radians
+        joint_rad.name = data.name
+        joint_rad.velocity = data.velocity
+        joint_rad.effort = data.effort
+        joint_rad.header = data.header
+        self.joint_pub.publish(joint_rad)
+
+    def _handle_gesture_callback(self, data):
+        """Gesture callback """
+        done = False
+        data = ast.literal_eval(data.data)
+        print(data["gesture"], data["timing"])
+        done = self.gesture_to_act(data["gesture"], data["timing"])
+        while not done:
+            rospy.logdebug("Wait until gesture is done")
+        self.gesture_pub.publish(True)
+
+    def gesture_to_act(self, gesture, timing):
+        resp = False
+        for i in range(len(self.gestures_name)):
+            if self.gestures_name[i] == gesture:
+                gesture_time_duration = self.gestures_duration[i]
+                for j in range(1, 5):
+                    gesture_time_duration_x = float(gesture_time_duration) / (j)
+                    if float(timing) < gesture_time_duration_x:
+                        speed = j
+                    else:
+                        speed = 2  # the default speed value
+                rospy.loginfo(
+                    "The speed of the gesture "
+                    + str(self.gestures_name[i])
+                    + " is: "
+                    + str(speed)
+                )
+                # I calibrated the speed according to the gesture duration and the timing of the word
+                self.gesture_service.publish(self.gestures_name[i])
+                resp = True
+        return resp
+
+    def get_files(self, dirName):
+        # create a list of file and sub directories
+        # names in the given directory
+        listOfFile = os.listdir(dirName)
+        allFiles = list()
+        # Iterate over all the entries
+        for entry in listOfFile:
+            # Create full path
+            fullPath = os.path.join(dirName, entry)
+            # If entry is a directory then get the list of files in this directory
+            if os.path.isdir(fullPath):
+                allFiles = allFiles + self.get_files(fullPath)
+            else:
+                allFiles.append(fullPath)
+        return allFiles
+
+    def read_gestures(self, path):
+        if not self.read_gesture_done:
+            all_files = self.get_files(path)
+            for filename in all_files:
+                if not filename.endswith(".xml"):
+                    continue
+                fullname = filename
+                tree = ET.parse(fullname)
+                root = tree.getroot()
+                for child in root:
+                    if child.tag == "duration":
+                        self.gestures_duration.append(child.text)
+                    elif child.tag == "name":
+                        self.gestures_name.append(child.text)
+            for index, el in enumerate(self.gestures_name):
+                self.gesture_list.append(
+                    {"name": str(el), "duration": self.gestures_duration[index]}
+                )
+            self.read_gesture_done = True
+        self.gesture_list_pub.publish(str(self.gesture_list))
+        self.read_gestures(path)
+
+
+def main():
+    service_name = ActuatorNameSpace.gesture.name
+    instance_id = rospy.get_param("/instance_id")
+    service_id = f"{service_name}_{instance_id}"
+
+    try:
+        rospy.init_node(service_name + "_qt")
+        params = rospy.get_param(service_name + "/" + instance_id + "_param/")
+        s = GestureInterface(service_name, params)
+        rospy.spin()
+    except rospy.ROSInterruptException:
+        pass
+
+
+if __name__ == "__main__":
+    main()
diff --git a/harmoni_actuators/harmoni_gesture/test/gesture_misty.test b/harmoni_actuators/harmoni_gesture/test/gesture_misty.test
new file mode 100644
index 00000000..01696ac1
--- /dev/null
+++ b/harmoni_actuators/harmoni_gesture/test/gesture_misty.test
@@ -0,0 +1,9 @@
+
+
+
+
+
+
+
+
+
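The duration-to-speed mapping in `gesture_to_act` above is compact but easy to misread: the inner loop overwrites `speed` on every pass, so only the last divisor (j = 4) actually decides the outcome. A standalone sketch of that logic, plus the XML parsing done by `read_gestures` (the XML snippet and function names here are illustrative, not part of HARMONI):

```python
import xml.etree.ElementTree as ET

# Hypothetical gesture file with the <name>/<duration> tags that read_gestures() scans.
GESTURE_XML = "<gesture><name>Misty/bye</name><duration>2.0</duration></gesture>"

def parse_gesture(xml_text):
    """Extract the gesture name and duration, as read_gestures() does per file."""
    root = ET.fromstring(xml_text)
    return root.find("name").text, float(root.find("duration").text)

def speed_for_timing(duration, timing, default=2):
    """Mirror the loop in gesture_to_act(): speed is overwritten each pass,
    so the result is 4 when timing < duration / 4, otherwise the default."""
    speed = default
    for j in range(1, 5):
        speed = j if timing < duration / j else default
    return speed

name, duration = parse_gesture(GESTURE_XML)
print(name, speed_for_timing(duration, 0.5))   # Misty/bye 2 (0.5 is not < 2.0 / 4)
print(name, speed_for_timing(duration, 0.25))  # Misty/bye 4 (0.25 < 0.5)
```

If the intent was "the largest divisor whose per-step duration still exceeds the word timing", the loop would need to stop at the first failing divisor; as written, only j = 4 matters.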
diff --git a/harmoni_actuators/harmoni_gesture/test/rostest_gesture_misty.py b/harmoni_actuators/harmoni_gesture/test/rostest_gesture_misty.py
new file mode 100755
index 00000000..0027fbfa
--- /dev/null
+++ b/harmoni_actuators/harmoni_gesture/test/rostest_gesture_misty.py
@@ -0,0 +1,84 @@
+#!/usr/bin/env python3
+
+
+PKG = "test_harmoni_gesture"
+# Common Imports
+import unittest, rospy, roslib, sys
+
+# Specific Imports
+from actionlib_msgs.msg import GoalStatus
+from harmoni_common_msgs.msg import harmoniAction, harmoniFeedback, harmoniResult
+from std_msgs.msg import String
+from harmoni_common_lib.action_client import HarmoniActionClient
+from harmoni_common_lib.constants import ActuatorNameSpace, ActionType
+from collections import deque
+import os, io
+import ast
+
+
+class TestGesture(unittest.TestCase):
+    def __init__(self, *args):
+        super(TestGesture, self).__init__(*args)
+
+    def setUp(self):
+        """
+        Set up the client for requesting to harmoni_gesture
+        """
+        rospy.init_node("test_gesture", log_level=rospy.INFO)
+        self.data = rospy.get_param(
+            "test_gesture_input"
+        )  # "{'gesture':'QT/bye', 'timing': 0.5}"
+        self.instance_id = rospy.get_param("instance_id")
+        self.result = False
+        self.name = ActuatorNameSpace.gesture.name + "_" + self.instance_id
+        self.service_client = HarmoniActionClient(self.name)
+        self.client_result = deque()
+        self.service_client.setup_client(self.name, self.result_cb, self.feedback_cb)
+        # NOTE currently no feedback, status, or result is received.
+        rospy.Subscriber(
+            "/harmoni_gesture_default/feedback", harmoniFeedback, self.feedback_cb
+        )
+        rospy.Subscriber("/harmoni_gesture_default/status", GoalStatus, self.status_cb)
+        rospy.Subscriber(
+            "/harmoni_gesture_default/result", harmoniResult, self.result_cb
+        )
+        rospy.loginfo("TestGesture: Started up. waiting for gesture startup")
+        rospy.sleep(
+            1
+        )  # TODO implement non-magic wait for audio_play node to initialize.
+        rospy.loginfo("TestGesture: Started")
+
+    def feedback_cb(self, data):
+        rospy.loginfo(f"Feedback: {data}")
+        self.result = False
+
+    def status_cb(self, data):
+        rospy.loginfo(f"Status: {data}")
+        self.result = False
+
+    def result_cb(self, data):
+        rospy.loginfo(f"Result: {data}")
+        self.result = True
+
+    def test_request_response(self):
+        rospy.loginfo(f"The input data is {self.data}")
+        self.service_client.send_goal(
+            action_goal=ActionType.REQUEST.value,
+            optional_data=self.data,
+            wait=True,
+        )
+        # s.gesture_pub.publish(test_input)
+        assert self.result == True
+
+
+def main():
+    import rostest
+
+    rospy.loginfo("test_gesture started")
+    rospy.loginfo("TestGesture: sys.argv: %s" % str(sys.argv))
+    rostest.rosrun(PKG, "test_gesture", TestGesture, sys.argv)
+
+
+if __name__ == "__main__":
+    main()
diff --git a/harmoni_actuators/harmoni_speaker/launch/speaker_service.launch b/harmoni_actuators/harmoni_speaker/launch/speaker_service.launch
index c86631bd..c4224e15 100755
--- a/harmoni_actuators/harmoni_speaker/launch/speaker_service.launch
+++ b/harmoni_actuators/harmoni_speaker/launch/speaker_service.launch
@@ -1,13 +1,7 @@
-
-
-
-
-
-
diff --git a/harmoni_actuators/harmoni_speaker/launch/speaker_service_misty.launch b/harmoni_actuators/harmoni_speaker/launch/speaker_service_misty.launch
new file mode 100755
index 00000000..8da85312
--- /dev/null
+++ b/harmoni_actuators/harmoni_speaker/launch/speaker_service_misty.launch
@@ -0,0 +1,8 @@
+
+
+
+
+
+
+
+
diff --git a/harmoni_actuators/harmoni_speaker/src/harmoni_speaker/speaker_service_misty.py b/harmoni_actuators/harmoni_speaker/src/harmoni_speaker/speaker_service_misty.py
new file mode 100755
index 00000000..72bfd3a1
--- /dev/null
+++ b/harmoni_actuators/harmoni_speaker/src/harmoni_speaker/speaker_service_misty.py
@@ -0,0 +1,186 @@
+#!/usr/bin/env python3
+
+# Common Imports
+import rospy, rospkg, roslib
+
+from harmoni_common_lib.constants import State, ActuatorNameSpace
+from harmoni_common_lib.service_server import HarmoniServiceServer
+from harmoni_common_lib.service_manager import HarmoniServiceManager
+import harmoni_common_lib.helper_functions as hf
+
+
+# Specific Imports
+from audio_common_msgs.msg import AudioData
+import numpy as np
+import base64
+import requests
+from requests import Request, Session
+from requests.exceptions import Timeout
+import json
+import soundfile as sf
+
+# import wget
+import contextlib
+import ast
+import wave
+import os
+
+
+class SpeakerService(HarmoniServiceManager):
+    """Takes sound and publishes it to the default audio topic for the audio_play package
+
+    Args:
+        HarmoniServiceManager ([type]): [description]
+    """
+
+    def __init__(self, name):
+        """ Initialization of variables and camera parameters """
+        super().__init__(name)
+        self.audio_publisher = rospy.Publisher(
+            "/audio/audio",
+            AudioData,
+            queue_size=1,
+        )
+        self.state = State.INIT
+        self.rospack = rospkg.RosPack()
+        self.robot_ip = rospy.get_param("/robot_ip")
+        return
+
+    def request(self, data):
+        """Publishes audio to the "/audio/audio" topic for the audio_play module
+
+        Converts input audio from bytes or a local/network path to an audio msg.
+
+        Args:
+            data (str): This could be a string of:
+                        - audio data
+                        - the path of a local wav file
+                        - the link of a wav audio file you want to download and play
+
+        Returns:
+            object: It contains information about the response received (bool) and response message (str)
+                response: the state of the request
+        """
+        # print(data)
+        self.state = State.REQUEST
+        self.actuation_completed = False
+        try:
+            if type(data) == str:
+                if ".wav" in data:
+                    data = self.file_path_to_audio_data(data)
+                    duration = data["duration"]
+                    data = data["audio_data"]
+                else:
+                    data = ast.literal_eval(data)
+                    print(data)
+                    outdir = self.rospack.get_path("harmoni_speaker") + "/temp_data/test.wav"
+                    sf.write(outdir, np.fromstring(data["audio_ogg"], dtype=float), data["audio_frame"])
+
+                    data = self.file_path_to_audio_data(self.rospack.get_path("harmoni_speaker") + "/temp_data/test.wav")
+                    duration = data["duration"]
+                    data = data["audio_data"]
+
+            rospy.loginfo("Writing data for speaker")
+            rospy.loginfo(f"length of data is {len(data)}")
+            # Maximum size for the file is 3 MB
+            payload = {
+                "fileName": "test.wav",
+                "data": data,
+                # "file": open("/root/harmoni_catkin_ws/src/HARMONI/harmoni_actuators/harmoni_tts/temp_data/tts.wav", 'rb'),
+                "ImmediatelyApply": True,
+                # The ImmediatelyApply flag makes the robot play the file
+                "OverwriteExisting": True,
+                # Always overwrite the same file, so it is discarded the next time something is played
+            }
+            headers = {"Content-type": "application/json"}
+            print("Sending request")
+            print("http://{}/api/audio".format(self.robot_ip))
+            s = Session()
+            req = Request(
+                "POST",
+                "http://{}/api/audio".format(self.robot_ip),
+                data=json.dumps(payload),
+                headers=headers,
+            )
+            # headers = headers,
+            prepped = req.prepare()
+            print(prepped.url)
+
+            response = s.send(prepped, timeout=1)
+            print("receiving response")
+            print(response)
+            rospy.sleep(duration)
+            self.state = State.SUCCESS
+            self.response_received = True
+            self.result_msg = response.text
+            rospy.loginfo("Request successfully completed")
+        except Timeout:
+            rospy.logwarn("Speaker failed: The IP of the robot appears unreachable")
+            self.state = State.SUCCESS
+            self.response_received = True
+            self.result_msg = ""
+            return {"response": self.state}
+        except IOError as e:
+            rospy.logwarn("Speaker failed: Audio appears too busy")
+            print(e)
+            self.state = State.FAILED
+            self.response_received = True
+            self.result_msg = ""
+            return {"response": self.state}
+
+    def file_path_to_audio_data(self, path):
+        """Returns audio data from a local path or internet link
+        TODO: Add wget to docker image
+
+        Args:
+            path (string): string of:
+                           - a local folder path
+                           - the link of the audio file you want to listen to
+
+        Returns:
+            json: return an object with two fields:
+                  - audio_data: base64 encoded file
+                  - duration: int (duration of the file)
+        """
+        file_handle = path
+        if "http" in path:
+            url = path
+            print("Beginning file download with wget module")
+            file_handle = (
+                self.rospack.get_path("harmoni_speaker") + "/temp_data/test.wav"
+            )
+        # it would also be possible to pass the wav file directly
+        with open(file_handle, 'rb') as f:
+            text = f.read()
+        data = base64.b64encode(text)
+        data = data.decode('utf-8')
+        with contextlib.closing(wave.open(file_handle, "r")) as f:
+            frames = f.getnframes()
+            rate = f.getframerate()
+            duration = frames / float(rate)
+            rospy.loginfo(f"The audio lasts {duration} seconds")
+        if "http" in path:
+            os.remove(file_handle)
+        return {"audio_data": data, "duration": duration}
+
+
+def main():
+    """Set names, collect params, and give service to server"""
+
+    service_name = ActuatorNameSpace.speaker.name
+    instance_id = rospy.get_param("/instance_id")
+    service_id = f"{service_name}_{instance_id}"
+
+    try:
+        rospy.init_node(service_name)
+
+        s = SpeakerService(service_id)
+
+        service_server = HarmoniServiceServer(service_id, s)
+
+        service_server.start_sending_feedback()
+        rospy.spin()
+    except rospy.ROSInterruptException:
+        pass
+
+
+if __name__ == "__main__":
+    main()
diff --git a/harmoni_actuators/harmoni_speaker/test/rostest_speaker_misty.py b/harmoni_actuators/harmoni_speaker/test/rostest_speaker_misty.py
new file mode 100755
index 00000000..c7b13843
--- /dev/null
+++ b/harmoni_actuators/harmoni_speaker/test/rostest_speaker_misty.py
@@ -0,0 +1,83 @@
+#!/usr/bin/env python3
+
+
+PKG = "test_harmoni_speaker"
+# Common Imports
+import unittest, rospy, roslib, sys
+
+# Specific Imports
+from actionlib_msgs.msg import GoalStatus
+from harmoni_common_msgs.msg import harmoniAction, harmoniFeedback, harmoniResult
+from std_msgs.msg import String
+from harmoni_common_lib.action_client import HarmoniActionClient
+from harmoni_common_lib.constants import ActuatorNameSpace, ActionType
+from collections import deque
+import os, io
+import ast
+
+
+class TestSpeaker(unittest.TestCase):
+    def __init__(self, *args):
+        super(TestSpeaker, self).__init__(*args)
+
+    def setUp(self):
+        """
+        Set up the client for requesting to harmoni_speaker
+        """
+        rospy.init_node("test_speaker", log_level=rospy.INFO)
+        self.data = rospy.get_param(
+            "test_speaker_input"
+        )  # "$(find harmoni_tts)/temp_data/tts.wav"
+        self.instance_id = rospy.get_param("instance_id")
+        self.result = False
+        self.name = ActuatorNameSpace.speaker.name + "_" + self.instance_id
+        self.service_client = HarmoniActionClient(self.name)
+        self.client_result = deque()
+        self.service_client.setup_client(self.name, self.result_cb, self.feedback_cb)
+        # NOTE currently no feedback, status, or result is received.
+        rospy.Subscriber(
+            "/harmoni_speaker_default/feedback", harmoniFeedback, self.feedback_cb
+        )
+        rospy.Subscriber("/harmoni_speaker_default/status", GoalStatus, self.status_cb)
+        rospy.Subscriber(
+            "/harmoni_speaker_default/result", harmoniResult, self.result_cb
+        )
+        rospy.loginfo("TestSpeaker: Started up. waiting for speaker startup")
+        rospy.sleep(
+            1
+        )  # TODO implement non-magic wait for audio_play node to initialize.
+        rospy.loginfo("TestSpeaker: Started")
+
+    def feedback_cb(self, data):
+        rospy.loginfo(f"Feedback: {data}")
+        self.result = False
+
+    def status_cb(self, data):
+        rospy.loginfo(f"Status: {data}")
+        self.result = False
+
+    def result_cb(self, data):
+        rospy.loginfo(f"Result: {data}")
+        self.result = True
+
+    def test_request_response(self):
+        rospy.loginfo(f"The input data is {self.data}")
+        self.service_client.send_goal(
+            action_goal=ActionType.REQUEST.value,
+            optional_data=self.data,
+            wait=True,
+        )
+        assert self.result == True
+
+
+def main():
+    import rostest
+
+    rospy.loginfo("test_speaker started")
+    rospy.loginfo("TestSpeaker: sys.argv: %s" % str(sys.argv))
+    rostest.rosrun(PKG, "test_speaker", TestSpeaker, sys.argv)
+
+
+if __name__ == "__main__":
+    main()
diff --git a/harmoni_actuators/harmoni_speaker/test/speaker.test b/harmoni_actuators/harmoni_speaker/test/speaker.test
index f10799e3..4ad6a6ce 100644
--- a/harmoni_actuators/harmoni_speaker/test/speaker.test
+++ b/harmoni_actuators/harmoni_speaker/test/speaker.test
@@ -1,10 +1,4 @@
-
-
-
-
-
-
diff --git a/harmoni_actuators/harmoni_speaker/test/speaker_misty.test b/harmoni_actuators/harmoni_speaker/test/speaker_misty.test
new file mode 100644
index 00000000..61fbb6ee
--- /dev/null
+++ b/harmoni_actuators/harmoni_speaker/test/speaker_misty.test
@@ -0,0 +1,8 @@
+
+
+
+
+
+
+
+
diff --git a/harmoni_actuators/harmoni_speaker/test/unittest_speaker.py b/harmoni_actuators/harmoni_speaker/test/unittest_speaker.py
index 271d8ca3..b2bb1ec5 100644
--- a/harmoni_actuators/harmoni_speaker/test/unittest_speaker.py
+++ b/harmoni_actuators/harmoni_speaker/test/unittest_speaker.py
@@ -49,4 +49,4 @@ def main():
     rosunit.unitrun(PKG, 'test_speaker', TestSpeaker, sys.argv)
 
 if __name__ == "__main__":
-    main()
+    main()
\ No newline at end of file
diff --git a/harmoni_actuators/harmoni_speaker/test/unittest_speaker_misty.py b/harmoni_actuators/harmoni_speaker/test/unittest_speaker_misty.py
new file mode 100644
index 00000000..993da6f8
--- /dev/null
+++ b/harmoni_actuators/harmoni_speaker/test/unittest_speaker_misty.py
@@ -0,0 +1,51 @@
+#!/usr/bin/env python3
+
+
+PKG = 'test_harmoni_speaker'
+# Common Imports
+import unittest, rospy, rospkg, roslib, sys
+# from unittest.mock import Mock, patch
+# Specific Imports
+from actionlib_msgs.msg import GoalStatus
+from harmoni_common_msgs.msg import harmoniAction, harmoniFeedback, harmoniResult
+from harmoni_common_lib.constants import State
+from std_msgs.msg import String
+import os, io
+import ast
+from src.harmoni_speaker.speaker_service_misty import SpeakerService
+import json
+
+class TestSpeaker(unittest.TestCase):
+
+    def __init__(self, *args):
+        super(TestSpeaker, self).__init__(*args)
+        rospy.loginfo("For running unittest you should also run roscore in another terminal")
+        rospy.init_node("test_speaker")
+
+    def setUp(self):
+        rospack = rospkg.RosPack()
+        self.path = rospack.get_path("harmoni_tts") + "/temp_data/tts.wav"
+        self.result = False
+        rospy.loginfo("TestSpeaker: Started up. waiting for speaker startup")
+        self.speaker_service_misty = SpeakerService("test_speaker")
+        rospy.loginfo("TestSpeaker: Started")
+
+    def test_play(self):
+        # Send a request to the real API server and store the response.
+        response = self.speaker_service_misty.request(self.path)
+        # Confirm that the request-response cycle completed successfully.
+        rospy.loginfo(response)
+        if response["response"] == State.SUCCESS:
+            rospy.loginfo("The response succeeded")
+            self.result = True  # set the result to true if the request succeeded
+        assert(self.result == True)
+
+def main():
+    # TODO convert to a test suite so that setup doesn't have to run over and over.
+    import rosunit
+    rospy.loginfo("test_speaker started")
+    rospy.loginfo("TestSpeaker: sys.argv: %s" % str(sys.argv))
+    rosunit.unitrun(PKG, 'test_speaker', TestSpeaker, sys.argv)
+
+if __name__ == "__main__":
+    main()
diff --git a/harmoni_actuators/harmoni_web/web/assets/imgs/birthday.jpeg b/harmoni_actuators/harmoni_web/web/assets/imgs/birthday.jpeg
new file mode 100644
index 00000000..cbc37deb
Binary files /dev/null and b/harmoni_actuators/harmoni_web/web/assets/imgs/birthday.jpeg differ
diff --git a/harmoni_actuators/harmoni_web/web/index.html b/harmoni_actuators/harmoni_web/web/index.html
index 7108a258..b9f5c6e1 100644
--- a/harmoni_actuators/harmoni_web/web/index.html
+++ b/harmoni_actuators/harmoni_web/web/index.html
@@ -12,7 +12,7 @@
-
+
 Harmoni Web
diff --git a/harmoni_actuators/harmoni_web/web/src/config/config.json b/harmoni_actuators/harmoni_web/web/src/config/config.json
index a7f00eef..fee59a16 100644
--- a/harmoni_actuators/harmoni_web/web/src/config/config.json
+++ b/harmoni_actuators/harmoni_web/web/src/config/config.json
@@ -22,11 +22,6 @@
         "component": "img",
         "children": "../assets/imgs/test.jpg",
         "id": "img_test2"
-      },
-      {
-        "component": "img",
-        "children": "../assets/imgs/test.jpg",
-        "id": "img_test3"
       }
     ]
   },
@@ -53,6 +48,118 @@
       }
     ]
   },
+  {
+    "component": "container",
+    "id": "QA_container",
+    "children": [
+      {
+        "component": "title",
+        "children": "",
+        "id": "title_QA"
+      },
+      {
+        "component": "row",
+        "id": "row_4",
+        "children": [
+          {
+            "component": "card",
+            "comp_class":"option_choice",
+            "children": "",
+            "id": "img_1"
+          },
+          {
+            "component": "card",
+            "comp_class":"option_choice",
+            "children": "",
+            "id": "img_2"
+          },
+          {
+            "component": "card",
+            "comp_class":"option_choice",
+            "children": "",
+            "id": "img_3"
+          },
+          {
+            "component": "card",
+            "comp_class":"option_choice",
+            "children": "",
+            "id": "img_4"
+          },
+          {
+            "component": "card",
+            "comp_class":"option_choice",
+            "children": "",
+            "id": "img_5"
+          },
+          {
+            "component": "card",
+            "comp_class":"option_choice",
+            "children": "",
+            "id": "img_6"
+          }
+        ]
+      }
+    ]
+  },
+  {
+    "component": "container",
+    "id": "QA_container_text",
+    "children": [
+      {
+        "component": "title",
+        "children": "",
+        "id": "title_QA_txt"
+      },
+      {
+        "component": "row",
+        "id": "row_5",
+        "children": [
+          {
+            "component": "card_text",
+            "comp_class":"option_choice",
+            "children": "",
+            "id": "txt_1"
+          },
+          {
+            "component": "card_text",
+            "comp_class":"option_choice",
+            "children": "",
+            "id": "txt_2"
+          },
+          {
+            "component": "card_text",
+            "comp_class":"option_choice",
+            "children": "",
+            "id": "txt_3"
+          }
+        ]
+      },
+      {
+        "component": "row",
+        "id": "row_6",
+        "children": [
+          {
+            "component": "card_text",
+            "comp_class":"option_choice",
+            "children": "",
+            "id": "txt_4"
+          },
+          {
+            "component": "card_text",
+            "comp_class":"option_choice",
+            "children": "",
+            "id": "txt_5"
+          },
+          {
+            "component": "card_text",
+            "comp_class":"option_choice",
+            "children": "",
+            "id": "txt_6"
+          }
+        ]
+      }
+    ]
+  },
   {
     "component": "container",
     "id": "code_container",
@@ -223,7 +330,7 @@
             {
               "component": "button",
               "children": "Send",
-              "id": "button_1"
+              "id": "button_10"
             }
           ]
         }
@@ -299,7 +406,7 @@
         "children": [
           {
             "component": "img",
-            "children": "../assets/imgs/logo.svg",
+            "children": "../assets/imgs/birthday.jpeg",
             "id": "main_img_alt"
           }
         ]
diff --git a/harmoni_actuators/harmoni_web/web/src/css/style.css b/harmoni_actuators/harmoni_web/web/src/css/style.css
index db0077a4..1937e2ad 100755
--- a/harmoni_actuators/harmoni_web/web/src/css/style.css
+++ b/harmoni_actuators/harmoni_web/web/src/css/style.css
@@ -54,4 +54,4 @@ img:hover {
     margin: auto;
     font-size: 3vmin;
     text-align: center;
-}
+}
\ No newline at end of file
diff --git a/harmoni_actuators/harmoni_web/web/src/css/style_qa.css b/harmoni_actuators/harmoni_web/web/src/css/style_qa.css
new file mode 100755
index 00000000..05ddd046
--- /dev/null
+++ b/harmoni_actuators/harmoni_web/web/src/css/style_qa.css
@@ -0,0 +1,61 @@
+html,
+body {
+    height: 100%;
+    min-height: 100%;
+    margin: 0;
+    padding: 1%;
+    overflow: auto;
+    font-family: "Palatino Linotype", "Book Antiqua", Palatino, serif;
+    text-align: center;
+    font-size: 2vmin;
+    background-color: white;
+}
+
+.container{
+    height: 100%;
+    width: 100%;
+    margin: 0;
+    text-align: center;
+    max-width: 100vw;
+}
+
+.button_try{
+    margin: auto;
+    margin-top: 5%;
+    text-align: center;
+    width: 100%;
+    aspect-ratio: 10/3;
+    font-size: 4vmin;
+}
+
+div{
+    margin: 0;
+    text-align: center;
+}
+
+img{
+    margin: auto;
+    text-align: center;
+    vertical-align: auto;
+    width: 30%;
+}
+
+text{
+    display: inline-block;
+    margin: auto;
+    text-align: center;
+    vertical-align: auto;
+    width: 40%;
+}
+
+img:hover {
+    opacity: 0.5;
+}
+
+
+.title{
+    margin: auto;
+    font-size: 4vmin;
+    text-align: center;
+}
diff --git a/harmoni_actuators/harmoni_web/web/src/js/run.js b/harmoni_actuators/harmoni_web/web/src/js/run.js
index b8254d66..4cb193ea 100644
--- a/harmoni_actuators/harmoni_web/web/src/js/run.js
+++ b/harmoni_actuators/harmoni_web/web/src/js/run.js
@@ -8,4 +8,10 @@ if (!IS_DEBUG_DISPLAY) {
     rosInit(ROS_MASTER_URI);
 } else {
     setup_cycle_through_displays(display_idx);
-}
\ No newline at end of file
+}
+
+function btnChrome_onclick() {
+    document.documentElement.webkitRequestFullScreen();
+}
+
+btnChrome_onclick()
\ No newline at end of file
diff --git a/harmoni_core/harmoni_decision/config/configuration.yaml b/harmoni_core/harmoni_decision/config/configuration.yaml
index 069a083b..26e78a2e 100644
--- a/harmoni_core/harmoni_decision/config/configuration.yaml
+++ b/harmoni_core/harmoni_decision/config/configuration.yaml
@@ -1,13 +1,13 @@
 # This is a configuration file describing the id of the child of each repo
 harmoni:
-  bot: ["default"]
+  #bot: ["default"]
   tts: ["default"]
-  web: ["default"]
-  stt: ["default"]
+  #web: ["default"]
+  #stt: ["default"]
   #face_detect: ["default"]
 hardware:
-  microphone: ["default"]
+  #microphone: ["default"]
   #camera: ["default"]
   face: ["default"]
   speaker: ["default"]
-  # gesture: ["default"]
+  gesture: ["default"]
\ No newline at end of file
diff --git a/harmoni_core/harmoni_decision/config/misty_configuration.yaml b/harmoni_core/harmoni_decision/config/misty_configuration.yaml
new file mode 100644
index 00000000..7061826d
--- /dev/null
+++ b/harmoni_core/harmoni_decision/config/misty_configuration.yaml
@@ -0,0 +1 @@
+robot_ip: "192.168.197.11"
\ No newline at end of file
diff --git a/harmoni_core/harmoni_decision/launch/harmoni_decision.launch b/harmoni_core/harmoni_decision/launch/harmoni_decision.launch
index 6b426f0d..70430d65 100644
--- a/harmoni_core/harmoni_decision/launch/harmoni_decision.launch
+++ b/harmoni_core/harmoni_decision/launch/harmoni_decision.launch
@@ -6,7 +6,7 @@
-
+
@@ -16,4 +16,4 @@
-
+
\ No newline at end of file
diff --git a/harmoni_core/harmoni_pattern/CMakeLists.txt b/harmoni_core/harmoni_pattern/CMakeLists.txt
index 08779c06..059d6102 100644
--- a/harmoni_core/harmoni_pattern/CMakeLists.txt
+++ b/harmoni_core/harmoni_pattern/CMakeLists.txt
@@ -19,7 +19,7 @@ find_package(catkin REQUIRED COMPONENTS
 ## Uncomment this if the package has a setup.py. This macro ensures
 ## modules and global scripts declared therein get installed
 ## See http://ros.org/doc/api/catkin/html/user_guide/setup_dot_py.html
-#catkin_python_setup()
+catkin_python_setup()
 
 ################################################
 ## Declare ROS messages, services and actions ##
diff --git a/harmoni_core/harmoni_pattern/config/configuration.yaml b/harmoni_core/harmoni_pattern/config/configuration.yaml
index 52b6069a..6b69bf25 100644
--- a/harmoni_core/harmoni_pattern/config/configuration.yaml
+++ b/harmoni_core/harmoni_pattern/config/configuration.yaml
@@ -23,6 +23,11 @@ mic_test:
     trigger_intent: "Hey"
     pattern_scripting: $(find harmoni_pattern)/pattern_scripting/mic_test.json
 
+misty_demo:
+  default_param:
+    trigger_intent: "Hey"
+    pattern_scripting: $(find harmoni_pattern)/pattern_scripting/misty_demo.json
+
 speak_test:
   default_param:
diff --git a/harmoni_core/harmoni_pattern/nodes/sequential_pattern.py b/harmoni_core/harmoni_pattern/nodes/sequential_pattern.py
index 5de157f6..a2aaab30 100755
--- a/harmoni_core/harmoni_pattern/nodes/sequential_pattern.py
+++ b/harmoni_core/harmoni_pattern/nodes/sequential_pattern.py
@@ -158,6 +158,7 @@ def start(self):
             elif self.script[self.script_set_index]["set"] == "sequence":
                 # self.count = -1
+                print("good")
                 self.do_steps(self.script[self.script_set_index]["steps"])
 
             elif self.script[self.script_set_index]["set"] == "loop":
@@ -169,7 +170,7 @@ def start(self):
                 break
             self.script_set_index += 1
             r.sleep()
-        return
+        return self.result
 
     def stop(self):
         """Stop the Pattern Player """
@@ -226,7 +227,6 @@ def setup_services(self, setup_steps):
                 callback_args=service,
                 queue_size=1,
             )
-
         return
 
     def do_steps(self, sequence, looping=False):
@@ -252,7 +252,8 @@
             passthrough_result = self.handle_step(step, passthrough_result)
             rospy.loginfo(f"************* End of sequence step: {cnt} *************")
-
+        # print(f"passthrough result:{passthrough_result}")
+        self.result = passthrough_result
         if looping:
             rospy.loginfo("Done with a loop!")
             if not rospy.is_shutdown():
@@ -407,7 +408,6 @@ def get_new_result(self, service):
         r = rospy.Rate(1)
         while result["time"] < call_time and not rospy.is_shutdown():
             rospy.loginfo(f"got old message length ({len(result['data'])})")
-
             if len(self.client_results[service]) > 0:
                 result = self.client_results[service].popleft()
@@ -418,8 +418,52 @@
         rospy.loginfo(
             f"Recieved result message length ({len(result['data'])}) from service {service}"
         )
+        # print(f"result : {result}")
         return result["data"]
 
+    def reset_init(self):
+        self.script_set_index = 0
+        self.end_pattern = False
+        for client in self.scripted_services:
+            self.client_results[client] = deque()
+        self.state = State.INIT
+
+    def request(self, data):
+        """Send goal request to appropriate child"""
+        rospy.loginfo("Start the %s request" % self.name)
+        rospy.loginfo(data)
+        if isinstance(data, str):
+            data = ast.literal_eval(data)
+        self.script = data
+        self.state = State.REQUEST
+        r = rospy.Rate(10)
+        while self.script_set_index < len(self.script) and not rospy.is_shutdown():
+            if self.script[self.script_set_index]["set"] == "setup":
+                self.setup(self.script[self.script_set_index]["steps"])
+            elif self.script[self.script_set_index]["set"] == "sequence":
+                self.count = -1
+                self.do_steps(self.script[self.script_set_index]["steps"])
+            elif self.script[self.script_set_index]["set"] == "loop":
+                self.count = -1
+                self.do_sequence(
+                    self.script[self.script_set_index]["steps"], looping=True, data=data
+                )
+            # elif self.end_pattern:
+            #     ##TODO
+            #     break
+            self.script_set_index += 1
+            r.sleep()
+        rospy.loginfo("_________SEQUENCE PATTERN END__________")
+        prepared = [dict(zip(cl, self.client_results[cl])) for cl in self.client_results]
+        j = json.dumps(prepared)
+        self.result_msg = str(j)
+        self.response_received = True
+        self.actuation_completed = True
+        self.state = State.SUCCESS
+        return self.result
+
+
+
 def main():
     """Set names, collect params, and give service to server"""
diff --git a/harmoni_core/harmoni_pattern/pattern_scripting/misty_demo.json b/harmoni_core/harmoni_pattern/pattern_scripting/misty_demo.json
new file mode 100644
index 00000000..b16c8767
--- /dev/null
+++ b/harmoni_core/harmoni_pattern/pattern_scripting/misty_demo.json
@@ -0,0 +1,74 @@
+[
+    {
+        "set": "sequence",
+        "steps": [
+            {
+                "tts_default": {
+                    "action_goal": "REQUEST",
+                    "resource_type": "service",
+                    "wait_for": "new",
+                    "trigger": "Hi My name is Misty!"
+                }
+            },
+            [
+                {
+                    "speaker_default": {
+                        "action_goal": "REQUEST",
+                        "resource_type": "actuator",
+                        "wait_for": "new",
+                        "trigger": "/root/harmoni_catkin_ws/src/HARMONI/harmoni_actuators/harmoni_tts/temp_data/tts.wav"
+                    }
+                },
+                {
+                    "face_mouth_default": {
+                        "action_goal": "DO",
+                        "resource_type": "actuator",
+                        "wait_for": "new"
+                    }
+                },
+                {
+                    "gesture_default": {
+                        "action_goal": "REQUEST",
+                        "resource_type": "actuator",
+                        "wait_for": "new",
+                        "trigger": "{'gesture':'Misty/bye', 'timing': 0.5}"
+                    }
+                }
+            ],
+
+            {
+                "tts_default": {
+                    "action_goal": "REQUEST",
+                    "resource_type": "service",
+                    "wait_for": "new",
+                    "trigger": "Start of the interaction"
+                }
+            },
+            [
+                {
+                    "speaker_default": {
+                        "action_goal": "REQUEST",
+                        "resource_type": "actuator",
+                        "wait_for": "new",
+                        "trigger": "/root/harmoni_catkin_ws/src/HARMONI/harmoni_actuators/harmoni_tts/temp_data/tts.wav"
+                    }
+                },
+                {
+                    "face_mouth_default": {
+                        "action_goal": "DO",
+                        "resource_type": "actuator",
+                        "wait_for": "new"
+                    }
+                },
+                {
+                    "gesture_default": {
+                        "action_goal": "REQUEST",
+                        "resource_type": "actuator",
+                        "wait_for": "new",
+                        "trigger": "{'gesture':'Misty/no', 'timing': 0.5}"
+                    }
+                }
+            ]
+        ]
+    }
+]
\ No newline at end of file
diff --git a/harmoni_core/harmoni_pattern/pattern_scripting/misty_interaction.json b/harmoni_core/harmoni_pattern/pattern_scripting/misty_interaction.json
new file mode 100644
index 00000000..4e5dce80
--- /dev/null
+++ b/harmoni_core/harmoni_pattern/pattern_scripting/misty_interaction.json
@@ -0,0 +1,98 @@
+[
+    {
+        "set": "sequence",
+        "steps": [
+            {
+                "web_default": {
+                    "action_goal": "DO",
+                    "resource_type": "service",
+                    "wait_for": "new",
+                    "trigger": "[{'component_id':'type_dialogue_container', 'set_content':''}]"
+                }
+            },
+            {
+                "bot_default": {
+                    "action_goal": "REQUEST",
+                    "resource_type": "service",
+                    "wait_for": "new",
+                    "trigger": "hey"
+                }
+            },
+
+            {
+                "tts_default": {
+                    "action_goal": "REQUEST",
+                    "resource_type": "service",
+                    "wait_for": "new"
+                }
+            },
+            [
+                {
+                    "gesture_default": {
+                        "action_goal": "REQUEST",
+                        "resource_type": "actuator",
+                        "wait_for": "new",
+                        "trigger": "{'gesture':'Misty/bye', 'timing': 0.5}"
+                    }
+                },
+                {
+                    "speaker_default": {
+                        "action_goal": "REQUEST",
+                        "resource_type": "actuator",
+                        "wait_for": ""
+                    }
+                },
+                {
+                    "face_mouth_default": {
+                        "action_goal": "DO",
+                        "resource_type": "actuator",
+                        "wait_for": "new"
+                    }
+                }
+            ]
+        ]
+    },
+    {
+        "set": "loop",
+        "steps": [
+            {
+                "web_default": {
+                    "action_goal": "REQUEST",
+                    "resource_type": "service",
+                    "wait_for": "new",
+                    "trigger": "[{'component_id':'type_dialogue_container', 'set_content':''}]"
+                }
+            },
+            {
+                "bot_default": {
+                    "action_goal": "REQUEST",
+                    "resource_type": "service",
+                    "wait_for": "new"
+                }
+            },
+            {
+                "tts_default": {
+                    "action_goal": "REQUEST",
+                    "resource_type": "service",
+                    "wait_for": "new"
+                }
+            },
+            [
+                {
+                    "speaker_default": {
+                        "action_goal": "REQUEST",
+                        "resource_type": "actuator",
+                        "wait_for": "new"
+                    }
+                },
+                {
+                    "face_mouth_default": {
+                        "action_goal": "DO",
+                        "resource_type": "actuator",
+                        "wait_for": "new"
+                    }
+                }
+            ]
+        ]
+    }
+]
\ No newline at end of file
diff --git a/harmoni_dialogues/harmoni_bot/launch/bot_service.launch b/harmoni_dialogues/harmoni_bot/launch/bot_service.launch
index 5ea5c4de..81d9b100 100644
--- a/harmoni_dialogues/harmoni_bot/launch/bot_service.launch
+++ b/harmoni_dialogues/harmoni_bot/launch/bot_service.launch
@@ -15,3 +15,4 @@
+
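The upload pattern used by `speaker_service_misty.py` above is a single JSON POST to Misty's `/api/audio` endpoint with a base64-encoded wav. A dependency-free sketch of the same payload construction (the service itself uses python-requests; the robot IP below is the one from `misty_configuration.yaml`, and the wav bytes are placeholder data):

```python
import base64
import json
from urllib.request import Request

ROBOT_IP = "192.168.197.11"  # value taken from misty_configuration.yaml in the diff

def build_audio_upload(wav_bytes, robot_ip=ROBOT_IP, filename="test.wav"):
    """Build the POST that the speaker service sends to Misty's SaveAudio endpoint.

    The stdlib Request here stands in for the requests.Request/Session pair
    used in speaker_service_misty.py; the payload keys match the diff.
    """
    payload = {
        "fileName": filename,
        # The endpoint expects the wav file as base64 text, not raw bytes
        "data": base64.b64encode(wav_bytes).decode("utf-8"),
        "ImmediatelyApply": True,   # play the file as soon as it is saved
        "OverwriteExisting": True,  # keep reusing the same file slot
    }
    return Request(
        "http://{}/api/audio".format(robot_ip),
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-type": "application/json"},
        method="POST",
    )

req = build_audio_upload(b"RIFF....WAVE")
print(req.full_url)  # http://192.168.197.11/api/audio
```

Dispatching it (the equivalent of `Session().send(prepped, timeout=1)` in the service) is the only step that needs the robot reachable; everything up to that point can be unit-tested offline.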