Frequently Asked Questions

Have Questions? Check Out Our FAQs or Reach Out!


Autonomous robots combine various technologies and components to operate without direct human intervention. Here’s a general overview of how they function:

  1. Perception: Autonomous robots are equipped with sensors that allow them to perceive their environment. These sensors can include cameras, lidar, radar, sonar, and more. They collect data about the robot’s surroundings, such as objects, distances, and environmental conditions.
  2. Sensing and Data Processing: The sensor data is processed by the robot’s onboard computer. Algorithms and techniques, such as computer vision, image processing, and sensor fusion, are employed to interpret the sensory information and create a representation of the environment.
  3. Decision-Making: Based on the perception of the environment, the robot’s control system generates decisions and actions. This can involve rule-based programming, algorithms, or advanced AI techniques. AI-based systems, like neural networks or reinforcement learning, can enable the robot to learn and improve its decision-making abilities over time.
  4. Planning and Navigation: Autonomous robots need to plan and navigate their environment to achieve their goals. They use algorithms like path planning, motion planning, and localization to determine the best actions to take. This includes avoiding obstacles, optimizing routes, and reaching desired destinations efficiently.
  5. Actuation: Once the robot has determined its course of action, it needs to physically interact with the environment. Actuators, such as motors, servos, or hydraulic systems, enable the robot to execute the desired movements or tasks.
  6. Feedback and Adaptation: Autonomous robots often incorporate feedback mechanisms to validate and refine their actions. They can use additional sensors or feedback from the environment to monitor the outcomes of their actions. If the results differ from the expected outcome, the robot can adjust its behavior or learn from the experience to improve future performance.

It’s important to note that the specific technologies and architectures used in autonomous robots can vary depending on the application and complexity of the tasks they perform. Furthermore, autonomous robots are an active area of research and development, and new advancements continue to enhance their capabilities and autonomy.
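
The steps above can be sketched as a simple sense–decide–act control loop. The `Lidar` stub and the rule in `decide` below are hypothetical placeholders for illustration, not any specific robot’s API:

```python
class Lidar:
    """Stub range sensor: yields distance (metres) to the nearest obstacle."""
    def __init__(self, readings):
        self._readings = iter(readings)

    def read(self):
        return next(self._readings)


def decide(distance, safe_distance=1.0):
    """Rule-based decision: stop if an obstacle is too close, else go."""
    return "stop" if distance < safe_distance else "forward"


def control_loop(lidar, steps):
    """Run the perception -> decision -> actuation cycle for a few steps."""
    actions = []
    for _ in range(steps):
        distance = lidar.read()    # 1-2. perception and data processing
        action = decide(distance)  # 3-4. decision-making and planning
        actions.append(action)     # 5.   actuation (here: just recorded)
    return actions


robot = Lidar([3.2, 2.1, 0.6])
print(control_loop(robot, 3))  # ['forward', 'forward', 'stop']
```

Real systems replace the stub sensor with camera/lidar drivers and the one-line rule with learned or planned policies, but the loop structure stays the same.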

The answer is no. Rather than replacing humans, the potential lies in the collaboration between humans and robots. Autonomous robots can assist humans in performing tasks, increase productivity, improve safety, and provide support in various fields. The focus should be on finding ways to leverage the strengths of both humans and robots, creating a symbiotic relationship that enhances overall capabilities and productivity.

Using a delivery robot inside a space can be safe if proper precautions and safety measures are in place. Here are some factors to consider:

  1. Navigation and Obstacle Avoidance: Delivery robots should be equipped with reliable navigation systems and sensors to safely navigate through the hotel environment. They should be capable of detecting and avoiding obstacles such as furniture, guests, and other objects in their path.
  2. Speed and Control: The speed of the delivery robot should be appropriate for the hotel environment to ensure safe operation. It should have the ability to slow down or stop when encountering unexpected situations or crowded areas.
  3. Emergency Stop and Manual Override: The robot should have an emergency stop button or mechanism that allows immediate human intervention if necessary. Additionally, there should be a manual override option to control the robot manually in case of any malfunction or emergency.
  4. Human Interaction and Communication: The robot should be designed to interact safely with hotel guests and staff. It should have clear visual cues and indicators to communicate its intentions, such as stopping, turning, or passing through. It should also be programmed to respond appropriately to human interactions or requests.
  5. Security and Privacy: Delivery robots should be designed with security and privacy considerations in mind. They may carry sensitive information or valuable items, so measures should be in place to ensure the security of the robot and its contents. Additionally, data collected by the robot, such as guest information or room numbers, should be handled securely and in accordance with privacy regulations.
  6. Staff Training and Maintenance: Hotel staff should be trained on how to interact with the delivery robot and understand its capabilities and limitations. Regular maintenance and inspections should be conducted to ensure the robot is in good working condition and to address any potential safety issues.

Further to the above, all robots should hold the necessary certifications under the relevant EU directives in order to operate smoothly and safely inside complex and crowded environments, such as a restaurant or a hotel.
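
As an illustration of points 2 and 3 above, a speed governor with an emergency stop might look like the following sketch. The thresholds and speeds are illustrative assumptions, not values from any certified product:

```python
def target_speed(obstacle_distance, crowded, e_stop,
                 max_speed=1.2, slow_speed=0.4):
    """Pick a safe target speed (m/s) from simple illustrative rules.

    e_stop            -- True when the emergency stop is pressed
    crowded           -- True when many people are detected nearby
    obstacle_distance -- metres to the nearest detected obstacle
    """
    if e_stop:
        return 0.0            # immediate human override
    if obstacle_distance < 0.5:
        return 0.0            # too close: stop and wait
    if crowded or obstacle_distance < 1.5:
        return slow_speed     # creep through busy areas
    return max_speed


print(target_speed(3.0, crowded=False, e_stop=False))  # 1.2
print(target_speed(1.0, crowded=False, e_stop=False))  # 0.4
print(target_speed(3.0, crowded=True, e_stop=True))    # 0.0
```

Note that the emergency stop is checked first, so a human override always wins over the autonomous rules.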

VSLAM stands for Visual Simultaneous Localization and Mapping. It is a technique used in robotics and computer vision to enable robots or autonomous systems to navigate and map their environments using visual information. VSLAM involves the real-time analysis of visual sensor data, such as images or video streams, to simultaneously estimate the robot’s position and create a map of its surroundings. 

Here’s an overview of how VSLAM works:

  1. Feature Extraction: VSLAM algorithms typically start by extracting distinctive features from the visual sensor data, such as corners, edges, or keypoints. These features serve as landmarks for subsequent localization and mapping.
  2. Localization: The VSLAM algorithm analyzes the extracted features to estimate the robot’s position within the environment. It compares the observed features with the features in its map to determine the most likely location of the robot. This process is known as localization or pose estimation.
  3. Mapping: As the robot moves through the environment, the VSLAM algorithm simultaneously constructs a map using the observed features and their estimated positions. It incrementally builds the map by adding new landmarks and updating existing ones.
  4. Loop Closure: VSLAM algorithms employ loop closure techniques to detect previously visited locations and close loops in the map. By recognizing places it has been before, the algorithm can correct errors in the estimated trajectory and improve map accuracy.
  5. Odometry Fusion: In addition to visual information, VSLAM algorithms often integrate data from other sensors, such as wheel encoders or inertial measurement units (IMUs), to enhance the accuracy and robustness of the localization and mapping process. This fusion of sensor data is known as sensor fusion or odometry fusion.
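
Step 5 (odometry fusion) can be illustrated with a toy fixed-gain filter that blends a wheel-odometry pose with a visual pose estimate. Real VSLAM systems use far more sophisticated estimators (EKF, pose-graph optimization), and the weighting below is an arbitrary assumption:

```python
def fuse_pose(wheel_est, visual_est, visual_weight=0.3):
    """Blend two (x, y, heading) estimates with a fixed-gain filter."""
    return tuple(
        (1 - visual_weight) * w + visual_weight * v
        for w, v in zip(wheel_est, visual_est)
    )


# Wheel encoders drift over time; the visual estimate pulls them back.
wheel = (10.4, 5.1, 0.12)   # drifted dead-reckoning pose
visual = (10.0, 5.0, 0.10)  # pose from matched visual landmarks
print(fuse_pose(wheel, visual))
```

The fused pose lies between the two inputs, weighted towards whichever source is trusted more; tuning that trust over time is exactly what Kalman-style filters do.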


The typical area that they can cover is 40,000 ㎡ (200 m × 200 m).

Please fill in the form with your company’s details, the daily tasks and needs you want to address (working hours, distances, personnel, etc.), the purpose of the robot (working or promotional), and some photos of your space of operation. You will receive information about the robot relevant to your field of interest, and we will contact you with an offer. We can also arrange an online or on-site demo presentation so you can familiarize yourself with the robots.

KEENON provides a 2-year warranty for its robots. However, an extension of this period can be discussed with our after-sales team at the relevant cost.

The T-series Dinerbots and the Butlerbot W3 run on the Android operating system. They support extended services and interconnection with mobile apps for remote calls and a notification bell used by your guests. iOS users can download the app by searching for “Keenon Robotics” or “keenon” on the App Store; Android users can download it by searching for “Keenon Robotics” on Google Play.

The main differences between the T9 Pro and the T5 Pro are as follows:

  1. The T9 Pro supports a larger operation screen.
  2. The T9 Pro has a die-cast chassis (weighs less).
  3. The T9 Pro features the new VSLAM sensor on its top, which recognizes the characteristics of the ceiling directly.

Yes, T10 supports guidance mode. The customer can customize the guidance video. 

The minimum passage width is 60 cm (58.5 cm).

Yes, there are two screens on the T10: an 11.6-inch operation screen and a 23.8-inch advertisement screen. The customer can display different videos on these two screens.

For the 23.8-inch screen: fewer than 4 pictures or videos, dimensions 1080 × 1920, each file smaller than 300 KB.

It can support up to 80,000 ㎡ (no limitation on length or width).

Yes, the upper three layers have plate detection for self-pickup:

        First layer: 485 × 410 × 240 mm
        Second layer: 485 × 410 × 215 mm
        Third layer: 485 × 410 × 205 mm


Disinfection robots are equipped with advanced sensors and technologies to detect human presence and ensure safety during disinfection operations. Here’s an explanation of how they detect human presence and deactivate the UV mode:

  1. Environmental Analysis/Mapping: The disinfection robots are equipped with sensors, including cameras and depth sensors, that enable them to analyze the environment. These sensors can detect the presence of objects and identify human shapes or movements within their range.
  2. Object Recognition: Using computer vision algorithms, the robots can recognize and distinguish between different objects in their surroundings. This includes recognizing the shape and movement patterns associated with human beings.
  3. Real-time Monitoring: The robots continuously monitor the environment during the disinfection process. They process the data from sensors in real-time to detect any changes or presence of humans in the vicinity.
  4. Safety Protocols: When the disinfection robots detect the presence of humans, they have built-in safety protocols to deactivate the UV mode. This ensures that the UV light, which can be harmful to human skin and eyes, is immediately turned off to prevent any potential harm or discomfort.
  5. Adaptive Behavior: The robots are designed to adapt their behavior based on the detected human presence. They can pause or modify their disinfection path to avoid direct contact or interference with individuals present in the area.

By integrating these detection mechanisms and safety protocols, disinfection robots can effectively identify human presence and take appropriate actions to ensure the safety of individuals nearby. This feature helps mitigate any potential risks associated with UV disinfection and minimizes the possibility of harm to humans during the disinfection process.
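
The safety protocol in point 4 is essentially an interlock: UV output is only allowed when no human is detected and the operator has not disabled it. A minimal sketch, where the `UvLamp` stub and the boolean inputs are hypothetical stand-ins for real detector and actuator interfaces:

```python
class UvLamp:
    """Stub UV emitter that just tracks its on/off state."""
    def __init__(self):
        self.on = False

    def set(self, on):
        self.on = on


def uv_interlock(lamp, human_detected, operator_enabled=True):
    """Allow UV only when the operator enables it AND no human is seen."""
    lamp.set(operator_enabled and not human_detected)
    return lamp.on


lamp = UvLamp()
print(uv_interlock(lamp, human_detected=False))  # True  (disinfecting)
print(uv_interlock(lamp, human_detected=True))   # False (UV cut off)
```

Because the interlock is re-evaluated on every monitoring cycle, the lamp is switched off as soon as a person enters the detection range.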

Yes, it is possible. Disinfection robots have a user-friendly interface that allows operators or users to manually activate or deactivate the UV mode if necessary. This feature provides an additional level of control and flexibility during the disinfection process.

Manual activation or deactivation of the UV mode can be useful in situations where specific areas or objects require targeted disinfection or when human presence is detected in close proximity to the robot. By allowing operators or users to have control over the UV mode, it enables them to adapt the disinfection process to specific requirements or safety considerations in real-time.

Certainly! Here are a few examples of specific safety considerations that may require manual control of the UV mode in disinfection robots:

  1. Proximity to Humans: If a disinfection robot detects a human in close proximity, manual control of the UV mode can be used to deactivate it. This ensures that the UV light, which can be harmful to human skin and eyes, is turned off to prevent any potential harm or discomfort.
  2. Delicate or Sensitive Objects: Certain objects or surfaces may be sensitive to UV light exposure. In situations where there are delicate materials, electronic devices, or artwork that could be damaged by UV radiation, operators or users can manually deactivate the UV mode to avoid any potential harm or deterioration.
  3. Restricted or Occupied Areas: In some cases, there may be areas that are temporarily restricted or occupied by individuals during the disinfection process. By manually deactivating the UV mode, operators or users can ensure the safety and comfort of people in those areas, while still allowing the robot to perform other disinfection operations using alternative methods.
  4. Operator Intervention: Manual control of the UV mode can also be useful when operators or users need to intervene in the disinfection process for any unforeseen circumstances or emergencies. They can quickly deactivate the UV mode to address the situation and then resume disinfection operations when appropriate.

These are just a few examples, and the specific safety considerations may vary depending on the environment, regulations, and specific requirements of the disinfection process. It’s important to assess the situation and exercise caution when determining whether to manually activate or deactivate the UV mode, considering the potential risks and the well-being of individuals in the vicinity.

Disinfection robots offer three task modes to accommodate different requirements. The instant task mode allows for immediate disinfection actions, the scheduled task mode enables programmed disinfection at specific times, and the remote task mode facilitates disinfection management from a remote location. These task modes enhance the adaptability and convenience of the disinfection process.
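
The three task modes could be modelled as a small dispatcher. The mode names follow the text above; the function signature and behaviour are a hypothetical sketch, not the robots’ actual software interface:

```python
import datetime


def run_task(mode, disinfect, now=None, scheduled_for=None):
    """Dispatch a disinfection task according to its mode.

    mode          -- "instant", "scheduled" or "remote"
    disinfect     -- callable that performs the actual disinfection
    scheduled_for -- datetime at which a "scheduled" task becomes due
    """
    now = now or datetime.datetime.now()
    if mode == "instant":
        return disinfect()                    # run immediately
    if mode == "scheduled":
        if scheduled_for and now >= scheduled_for:
            return disinfect()                # due: run now
        return "waiting"                      # not due yet
    if mode == "remote":
        return disinfect()                    # triggered from afar
    raise ValueError(f"unknown mode: {mode}")


later = datetime.datetime(2030, 1, 1)
print(run_task("scheduled", lambda: "done", scheduled_for=later))  # waiting
```

In a real deployment the "remote" branch would be driven by a network request from a management console rather than a direct call.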

Elevate your Business with Smart Bee Robotics

Need More Help?
Contact Us for Personalized Assistance

Please fill in the form to download the PDF file

    Full Name
    Company Name
    Company Type
    Mobile Telephone Number
    Company Telephone Number
    Describe the area of operation
    Number of employees in service (e.g., 20)
