Computer Aided Tablevision

is614
16 min read · Nov 7, 2021


An application of IoT to detect table cleanliness

1. Problem Statement

1.1 Background

Dirty trays, crockery and litter left behind by diners at food centre tables is a social issue that has taken centre stage since the outbreak of COVID-19. From 1 September 2021, the National Environment Agency (NEA) of Singapore began taking enforcement action against diners who do not clean up after their meals. With this enforcement, we expect diners to clear their trays and crockery from the tables. However, this raises a new issue: are the tables actually clean? Without trays and crockery on the tables, it is hard to tell visually whether a table has been wiped down. In the following section, we discuss the problem statements for this project.

1.2 Problem Statements

  • Diners: How might we provide clean tables for diners at food centres after the tables have been cleared by previous diners?
  • Cleaners: How might we provide cleaners with an efficient and effective way to know which tables have been previously occupied?
  • Estate owners or managers: How might we provide estate managers with information on the cleanliness of food centres and the utilisation of resources?

1.3 Flaws of Existing Solutions

Currently, no existing solution addresses the cleanliness of the tables. The closest initiatives address tray and crockery return after food consumption, as follows:

  • Manual detection: Conventionally, cleaners check by sight whether a table needs cleaning. At times, diners may complain and remind the cleaners to wipe down their table. Such an approach is neither effective nor efficient.
  • IoT solutions: There are two existing IoT solutions from SMU-X, which mainly aim to collect data on tray returns by diners. These solutions neither fully tap the potential of IoT nor effectively address the real need (getting diners to return their trays). Fundamentally, we are addressing a different problem — the cleanliness of the table.
    (Details of the SMU-X projects can be found at: Internet of Trays, Your Singapore Hawker and Our hawker centre dilemma: “Must return trays meh?”).

1.4 Our Solution

Our Computer Aided Tablevision (CAT) project uses IoT technology to track the cleanliness status of tables and the regularity of cleaning, following the enforcement of tray and crockery return by diners. The solution can inform management of the centre’s cleanliness and make cleaners’ work more efficient and effective, thereby giving diners clean tables.

Compared with the previous IoT solutions, our solution can:

  • collect more accurate and informative data using low-cost components
  • signal cleaners in a timely manner when a table needs cleaning
  • provide analysis for management reporting and planning

2. High-Level Solution Design

2.1 IoT component and system architecture

The three main components of our IoT system are as follows:

Main components of IoT system

The system architecture which links all three main components together is summarised below.

System architecture

2.1.1 Things

For things, the Raspberry Pi acts as the embedded device providing computing power and memory, and as a gateway connecting to the internet. The Raspberry Pi has a mature and well-supported ecosystem. We have identified Pi3B+ as a suitable device due to widespread availability, reasonable compute power, memory, camera port and built-in wireless LAN. It is also more affordable compared to the latest Pi4B and will remain in production until at least January 2026.

Raspberry Pi 3 Model B+

Raspberry Pi has recently released the Pi Zero 2 W, which is similar in specification to the Pi3B+. It has similar compute power and memory, a camera port and built-in wireless LAN, and thus meets our project requirements. Its tiny form factor and low price point make it ideal for production deployment of our solution.

Raspberry Pi Zero 2 W (L) and casing with Camera (R)

The sensing modality is image processing. A Pi Camera module is connected directly to the Raspberry Pi via the CSI-2 port. QR code detection is performed locally on the Raspberry Pi (edge processing) to reduce the amount of data transferred between the Raspberry Pi and the Splunk data logger, i.e. no image file is stored or sent out to the internet. This scales better, as image transmission and centralised processing are inherently costly. Storing or transmitting images also raises serious privacy concerns, since photographs and videos can be used to uniquely identify a person. A 5-megapixel Pi Camera module that captures FHD (1920x1080) video produces sufficient image quality to capture the QR codes. Although it is no longer in production, it can still be sourced for about half the price of the newer 8-megapixel Pi Camera Module 2 that replaced it. Due to hardware limitations, both the Pi3B+ and Pi4B can only encode 1080p30 video, rendering the additional pixels unnecessary.

Raspberry Pi3 with camera in casing for deployment

In our solution, QR code detection is the main approach to collecting contextual data. QR codes are printed, laminated and pasted at each seat position on the tables in the food centre, and a camera system is mounted directly overhead on the ceiling or an overhead structure. Using periodic captures, table occupancy can be derived: the QR code at a seat position is not detected when a diner is seated there. Cleaners wearing QR codes on their headwear can be detected when they clean a table. This contextual data allows us to create a real-time monitoring and management dashboard providing an overview of table occupancy and an assessment of table cleanliness. In addition, a notification dashboard showing real-time table status could help cleaners work more efficiently by highlighting tables that require cleaning.

QR codes on tables and apparels

The image capture, processing and transmitting script is written in Python, utilising the following main libraries:

  • PiCamera: Python interface to the Raspberry Pi camera module
  • OpenCV: computer vision library and tools
  • Pyzbar: detection and decoding of QR codes
  • splunk_http_event_collector: sends events to a Splunk HTTP Event Collector

We maximised the capabilities of the Pi3B+ with multi-threading to utilise the multiple cores of the Pi’s CPU. Also, the OpenCV package was installed by building the library from source instead of using the standard, older version in the Raspbian repository. This enables the NEON and VFPV3 optimisations for the Pi’s ARM CPU, improving image processing performance. The result is a stable data acquisition pipeline that supports the desired accuracy and precision in QR code detection.
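The producer-consumer structure of the pipeline can be sketched as follows. This is a minimal, runnable outline only: the real script captures frames with PiCamera and decodes them with pyzbar, both of which are replaced here by stubs (`capture_thread`, `decode_stub`) so the threading structure can be shown without camera hardware.

```python
# Sketch of the multi-threaded capture/decode pipeline (stubbed I/O).
# Function names are illustrative, not the project's actual code.
import queue
import threading

def decode_stub(frame):
    # Stands in for pyzbar.pyzbar.decode(image); returns codes "seen" in the frame.
    return frame.get("codes", [])

def capture_thread(frames, frame_queue):
    # Stands in for the PiCamera capture loop.
    for frame in frames:
        frame_queue.put(frame)
    frame_queue.put(None)  # sentinel: no more frames

def worker_thread(frame_queue, results):
    while True:
        frame = frame_queue.get()
        if frame is None:
            break
        results.extend(decode_stub(frame))

def run_pipeline(frames):
    frame_queue = queue.Queue(maxsize=4)  # bounded queue applies back-pressure
    results = []
    producer = threading.Thread(target=capture_thread, args=(frames, frame_queue))
    consumer = threading.Thread(target=worker_thread, args=(frame_queue, results))
    producer.start()
    consumer.start()
    producer.join()
    consumer.join()
    return results
```

Separating capture from decoding this way lets the camera keep producing frames while another core handles the comparatively slow QR decoding.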

2.1.2 Connectivity

Once the Raspberry Pi decodes the detected QR codes, the information is pushed to the Splunk server over the internet via a flat topology. The Raspberry Pi can connect through a WiFi access point, which is usually available in food centres via the Wireless@SG programme. To further enhance security, a dedicated SSID can be carved out for the device deployment to isolate the network from casual Wireless@SG users.

The data is sent to Splunk’s HTTP Event Collector (HEC) as log events in JavaScript Object Notation (JSON) format. Each Raspberry Pi is issued a token to authenticate its connection with the server before it is allowed to send data. Tokens are entities that let logging agents and HTTP clients connect to the HEC input. Each token has a unique value, a 128-bit number represented as a 32-character globally unique identifier (GUID). When a client connects, it presents this token value; if HEC receives a valid token, it accepts the connection and the client can deliver its payload of application events in either text or JSON format.
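As a rough illustration of what the device sends, the snippet below builds a HEC-style request: an `Authorization: Splunk <token>` header and a JSON event body. The project itself uses the splunk_http_event_collector library rather than raw HTTP, and the token, host name and event fields here are placeholders.

```python
# Sketch of a Splunk HEC event payload (illustrative field names).
import json
import time

def build_hec_request(token, detections, host="table001-pi"):
    # HEC authenticates via the Authorization header: "Splunk <token>".
    headers = {"Authorization": f"Splunk {token}"}
    event = {
        "time": int(time.time()),
        "host": host,
        "sourcetype": "_json",
        "event": {"codes": detections},  # e.g. decoded QR values ["S1", "S2", "S4"]
    }
    return headers, json.dumps(event)

# Placeholder token in the 32-character GUID format described above.
headers, body = build_hec_request(
    "00000000-0000-0000-0000-000000000000", ["S1", "S2", "S4"])
```

The actual POST (to the HEC endpoint on port 8088 by default) is handled by the library in the real pipeline.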

Splunk Log

2.1.3 Sense-making

Splunk Enterprise is used to receive the data stream from the Raspberry Pi. The server validates the authentication token before allowing the connection, preventing unauthorised sources from sending garbage or malicious content to a server that is exposed to the internet. The data is indexed and stored on the server for further processing.

Splunk Enterprise provides search and evaluation functions on the data, dashboard visualisation for monitoring and reporting, and alerting on events. It was selected so that the heavy lifting of data ingestion, user interface, and the associated security and maintenance features is taken care of by the platform. This allows the team to focus on the sense-making and visualisation aspects of the application.

Splunk Enterprise is deployed on AWS Cloud using a single EC2 instance for demonstration. Further details about the security and infrastructure management we considered can be found in our detailed design considerations.

2.2 Design consideration

The over-arching design consideration is selecting the best things, connectivity and sense-making components to deliver a functional solution in the shortest time and at the lowest cost, while minimising the risk exposures. In deciding what is “best”, needs and good-to-haves are considered, conflicting requirements are weighed, and trade-offs are made to arrive at the solution. The guiding principles are to reduce complexities and use the minimum to achieve the desired outcomes. These are documented in our lean business case and detailed design considerations.

3. Insights from analysis

3.1 Data Processing

The data stream is piped into Splunk for parsing and information processing. For detailed explanation of how the required information is extracted from the raw string, please refer to CAT Detailed Data Processing.

Using our prototype setup, we noticed drops in the QR code captures. For example, within a 1-second interval, the Raspberry Pi might capture only 3 of the 4 QR codes displayed on the table.

Three QR codes for the seats are detected even though none of the four QR codes is covered

These drops cause very noticeable fluctuations, making it impossible to tell accurately whether the table is occupied. Hence, we decided to use a moving-window method, summing the detection counts for each seat and cleaner QR code individually over a window period. We also apply a threshold to the aggregated data to tune the sensitivity of the system to best match our setup scenario.
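The moving-window idea can be sketched in a few lines of Python. This is a simplified reconstruction (the actual aggregation is done in Splunk, described further below); per-second counts, the 11-second window and the threshold of 8 follow the values reported later in this section, and the function name is ours.

```python
# Sketch of the moving-window sum with a presence threshold.
# Window of 11 seconds and threshold of 8 detections taken from the text.
from collections import deque

WINDOW = 11
THRESHOLD = 8

def qr_present(per_second_counts):
    """Yield a per-second presence decision for one QR code: True when the
    sum of detections over the trailing window reaches the threshold."""
    window = deque(maxlen=WINDOW)  # old samples fall off automatically
    for count in per_second_counts:
        window.append(count)
        yield sum(window) >= THRESHOLD
```

A brief drop-out (a second or two of missed detections) barely moves the windowed sum, so the presence decision stays stable instead of flickering with each missed capture.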

The following test scenarios were used to determine the window period and threshold accuracy:

  1. Seat position occupied to unoccupied and vice versa.
  2. Cleaner is present to not present and vice versa.

For test scenario 2, we expect the time for a cleaner to wipe down a table that has no trays or crockery to be at least 20 secs. If a simple sum were used to aggregate Cleaner_ID, we would be restricted to a window of half of 20 secs to be able to detect the cleaner’s presence over two windows. Using a moving window instead gives us a per-second aggregate and greater flexibility in tuning the system. Testing various combinations of window periods and threshold values, we found an eleven-second window to be best.

For the QR codes of the seat positions (S1-S4), our prototype setup can detect a QR code when it is present in at least eight out of eleven captures, with ~94% accuracy.

Average accuracy of detection of all 4 seat positions

For details on how the accuracy is calculated and the Splunk code used, please refer to CAT Detailed Data Processing.

To measure cleaner accuracy, we used test scenario 2. The Cleaner_ID QR code was more difficult to detect accurately than the fixed seat positions, due to the relatively shorter detection period and the movement of the cleaner. To improve the accuracy, we used a second-order moving average to detect the cleaner QR code with ~90% accuracy.

Percentage accuracy of Cleaner_ID detected per hour

For full details about how we improved on the accuracy, please refer to CAT Detailed Data Processing.
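One way to read "second-order moving average" is smoothing the already-smoothed series a second time, which further damps the brief drop-outs caused by the cleaner's movement. The sketch below shows that interpretation; it is our reconstruction (the actual computation is in the Splunk queries linked above), and the function names and window size are illustrative.

```python
# Sketch of a second-order moving average: smooth the per-second detection
# counts once, then smooth the result again. Window size is illustrative.
from collections import deque

def moving_average(values, window=11):
    buf, out = deque(maxlen=window), []
    for v in values:
        buf.append(v)
        out.append(sum(buf) / len(buf))  # average over the trailing window
    return out

def second_order_ma(values, window=11):
    # Applying the moving average twice damps short drop-outs more strongly.
    return moving_average(moving_average(values, window), window)
```

A single missed detection perturbs the first-order average by at most 1/window, and the second pass spreads that perturbation further, so isolated misses barely affect the final signal.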

From the aggregated data, we use the occupancy of the seat positions to determine whether the table is occupied. The table is considered occupied as long as at least one seat position is occupied (its QR code detections in an eleven-second window fall below eight) and the table is not being cleaned. For example, Table001 has 4 seat positions: S1, S2, S3 and S4. Table001 is considered occupied if at least one seat position is occupied and no cleaner is cleaning at the table.

We coded table occupancy as a logical variable “Occupancy” in Splunk as follows:

A table requires cleaning when it becomes occupied and a cleaner has not yet cleaned it. We coded the table cleaning status as a logical field variable “flag” in Splunk as follows:

We will use “Occupancy” and “flag” in our dashboard.
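The logic of the two fields can be expressed in Python for clarity. This is a reconstruction from the prose above, not the actual Splunk expressions (which were shown as screenshots); the function names and the boolean encoding are ours.

```python
# Sketch of the "Occupancy" and "flag" logic described in the text.
def occupancy(seat_present, cleaner_present):
    """seat_present maps each seat ID to True if its QR code is detected
    (i.e. the seat is EMPTY). A seat is occupied when its code is NOT seen.
    The table is occupied if any seat is occupied and no cleaner is present."""
    any_seat_occupied = any(not present for present in seat_present.values())
    return any_seat_occupied and not cleaner_present

def update_flag(flag, occupied, cleaner_present):
    """flag is True while the table still needs cleaning."""
    if occupied:
        return True   # table became (or stays) dirty
    if cleaner_present:
        return False  # cleaner wiped the table down
    return flag       # no change otherwise
```

Note the inversion: a *detected* seat QR code means the seat is free, because a diner's body blocks the code from the overhead camera.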

3.2 Data Simulations for Management Dashboards

We used FlexSim to simulate the event of diners arriving at a table, having their meals and then leaving.

During off-peak hours, diners arrive following a Poisson distribution at 8 per hour, while peak hours see 20 diners per hour. The time spent at each table follows a Normal distribution with a mean of 30 mins and a standard deviation of 8 mins.

FlexSim Simulation

The simulated data is extracted to a spreadsheet, where we further created random cleaning events for each table that required cleaning. The data is then loaded into Splunk for dashboarding. This allowed us to quickly obtain data for a day, a week and a month as a basis for our management dashboards.
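The arrival model above (Poisson arrivals, Normal dwell times) can also be generated in plain Python, as a rough stand-in for the FlexSim model. The function below is illustrative only; the 5-minute floor on dwell time is our assumption to keep samples positive, and the actual project used FlexSim's distributions directly.

```python
# Sketch of the diner-arrival simulation: Poisson arrivals (exponential
# inter-arrival times) and Normal(30, 8) minutes spent at the table.
import random

def simulate_diners(hours, rate_per_hour, seed=0):
    rng = random.Random(seed)        # fixed seed for reproducibility
    t, events = 0.0, []
    while True:
        # Exponential inter-arrival time (in minutes) gives Poisson arrivals.
        t += rng.expovariate(rate_per_hour / 60.0)
        if t >= hours * 60:
            break
        # Dwell time ~ Normal(30, 8) minutes, floored at 5 (our assumption).
        stay = max(5.0, rng.normalvariate(30.0, 8.0))
        events.append((t, t + stay))  # (arrival, departure) in minutes
    return events

# e.g. two off-peak hours at 8 diners/hour
events = simulate_diners(hours=2, rate_per_hour=8)
```

Swapping `rate_per_hour` to 20 models the peak-hour load described above.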

Management dashboards for table and cleaner management were created on Splunk to analyse table occupancy, cleanliness and cleaner productivity over different periods.

  • A ‘Missed Cleaning’ event is triggered when the table status changes from “required cleaning” directly to “is occupied”, i.e. diners have sat down at an uncleaned table.
Table Management Dashboard
Cleaner Management Dashboard for daily query, on hourly basis

3.3 Sense-making for stakeholders

Our data is channelled into a dashboard that displays real-time updates of the table status at the food centre for the various stakeholders.

Cleaners

A “traffic light” colour system indicates the status of each table in an intuitive way, letting cleaners quickly identify the tables that require cleaning. A combination of the table occupancy and the cleaner flag is used to derive the colours.

Traffic light colours showing the different table status

A red box indicates that the table is unoccupied and requires cleaning, a yellow box indicates that the table is currently occupied, and a green box indicates that the table is clean and ready for diners.
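The mapping from the two status fields to a colour is a small decision rule, sketched below. The function name and boolean inputs are illustrative (the dashboard derives the colours inside Splunk); the colour priorities follow the description above.

```python
# Sketch of the traffic-light mapping from table status to colour.
def table_colour(occupied, needs_cleaning):
    if occupied:
        return "yellow"  # diners currently at the table
    if needs_cleaning:
        return "red"     # unoccupied but not yet wiped down
    return "green"       # clean and ready for diners
```

Occupancy takes priority: an occupied table shows yellow even if it will need cleaning once the diners leave.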

Without trays and crockery on the tables, it is more difficult for cleaners to know which tables have been previously occupied and thus require a wipe-down. With our IoT solution, cleaners can tell immediately which tables require cleaning and where these tables are located, reducing the need to pace the entire food centre looking for dirty tables or to recall where diners last sat.

Dashboard for Cleaners

We envision our dashboard for cleaners to be located at the crockery washing point of a food centre. Cleaners could be stationed there to wash the crockery, and if there are tables to be cleaned, the dashboard would alert them so that they can wipe down the necessary tables. In this way, cleaners can perform both functions of washing and wiping down tables more efficiently.

As cleaning hours are automatically tracked in our design, workers could be given rewards based on the actual hours of cleaning.

Cleaners at resting area waiting to be notified of cleaning requirements

Management

Now that tray and crockery returns are mandated, operators of food centres may be considering whether to reduce cleaner headcount. To make this decision, management could use the table usage patterns on the dashboard to determine when the peak periods for cleaning are and where the most heavily utilised tables are located. These insights allow managers to split the food centre into zones so that the cleaning load is evenly distributed.

Management dashboard

Further decisions about deployment of cleaners could then be made based on the volume of tables cleaned and workload to better handle peak period cleaning.

Management dashboard with Cleaners Weekly and Monthly overview (Daily view was shown earlier)

Additional analysis could be carried out on the “Missed Cleaning” table status. For example, management could set targets to lower the instances of missed cleaning to improve the cleanliness of the food centre. Understanding the reasons behind long missed-cleaning durations could also help cleaners be more effective: for example, how shift changes affect the missed-cleaning rate, or whether other routine duties prevent cleaners from reaching dirty tables promptly.

In our FlexSim simulations, we found that during peak hours for lunch and dinner, there was little to no cleaning done, likely due to the continued occupancy of the tables by different groups of diners. There were also gaps between Cleaners’ throughput and target cleanliness that could be investigated for follow-up actions to improve cleanliness levels.

Diners

Our fully implemented IoT solution can enhance the dine-in experience. For example, the unpleasant experience of walking to a seemingly “clean” table only to find food stains on closer inspection would be reduced. Diners at food centres employing our IoT solution could dine with peace of mind, knowing that their table has been wiped down.

4. Learning and discussion points

4.1 Site Considerations

Deployment at a physical site is always associated with uncertainties, as conditions are far from ideal.

Mounting the Raspberry Pi at the ideal location is a challenge. At a food centre, the camera-to-table distance may vary from 3m to 6m, depending on the ceiling structure of the site. This can be addressed with an optical zoom lens, at the expense of higher equipment costs and additional calibration.

Raspberry Pi High Quality Camera with Zoom Lens

To reduce maintenance requirements, fixed utility power is preferred over batteries. As we scale up the deployment, an N-way USB power adapter can supply power to multiple Raspberry Pis concurrently, at the expense of longer micro-USB cable runs and slightly lower resiliency, since a single power adapter failure will affect more Raspberry Pis. Power-over-Ethernet (PoE) was not considered due to the more expensive PoE HAT, UTP cabling and active PoE switch requirements.

In practice, a 2-wire 5V power line can be connected to the 5V and ground pins of the Raspberry Pi’s GPIO header instead of micro-USB, providing power to multiple devices at a lower cost.

Raspberry Pi GPIO header pin layout

Although the Pi Camera performs reasonably in low light, image capture quality could be affected on an overcast day when the lights have not been turned on. To mitigate this, we can equip the Raspberry Pi with infra-red LEDs and the Raspberry Pi Camera Module 2 NoIR, again at a higher cost.

Pi Camera with Infra-red sensor LED for night vision

4.2 Keeping Things Simple

The choice of sensors and embedded devices is very important. Using a camera sensor to detect QR codes for both the tables and the cleaners allowed us to simplify our solution and focus on improving the data collection. Existing Python packages for image capture, QR code detection and data transmission were used for reliability.

4.3 Sensor Modality Alternatives

For our project, using QR codes meant pasting codes on tables and cleaners’ apparel. Object detection could have been used instead, but it would require another way to uniquely identify cleaners for management analysis.

In an earlier project on monitoring tray returns, Team Hardcode sent their images to the cloud for processing (method 2). The cons they highlighted are the high data throughput required, high ML processing costs and privacy concerns. With increasing computing power and memory in ever-smaller devices, parallel computing, and libraries for AI and ML, it is becoming possible to use image classification at the edge to detect persons, trays and crockery with trained models, and thus infer table occupancy and cleaning in our project. Edge processing lowers the cost of sending images to a central server for analysis, and image classification removes the need to deploy QR codes on tables and apparel, at the expense of more costly devices.

Image classification — people detection

4.4 Maximising Sensor Coverage for Economic Scalability

With the current sensor resolution and computing power, it is possible to detect up to 9 QR codes with a single Raspberry Pi, making deployments across side-by-side tables with no in-between seats possible.

However, the challenge is identifying which table the cleaner has cleaned. Currently, cleaner detection is tied to the single table captured by the device. This could be solved through image classification, by partitioning the captured area and associating each detection with its spatial position.

Detection across tables

4.5 Immediate Occupancy during Peak Hours

The system cannot detect changes between different groups of diners at a table. This can happen during peak hours, when groups of waiting diners quickly sit down at dirty tables to ‘chope’ (reserve) them. In such cases, the table status may not change to indicate that cleaning is required, and diners may instead signal the cleaners to clean the table. In the management dashboard, cleaning activity during peak hours that is not coupled with a table status requiring cleaning may be an indicator that the food centre is crowded.

4.6 Robotic Table Cleaning

At some food courts, robots have been deployed to act as tray collection points, bringing the tray return nearer to diners. With our solution providing table occupancy and cleanliness information, cleaning robots could be deployed for targeted table cleaning, further reducing the reliance on human labour.

Smart tray return robot at Koufu, Toa Payoh Hub


After-thoughts

This project brought much anticipation, anxiety, frustration, delight and satisfaction with each small victory along the way. It has given us a good appreciation of the complexities of IoT and the opportunities it presents for the future. We are grateful for the guidance and experiences shared by the course instructors, and for the camaraderie shown by the team members. Thank you for the wonderful times!
