Examiner 1 corrections
atyndall committed Jun 19, 2015
1 parent dd00cba commit dbba277
Showing 12 changed files with 22 additions and 22 deletions.
Binary file modified conclusion/conclusion.pdf
4 changes: 2 additions & 2 deletions conclusion/conclusion.tex
@@ -50,7 +50,7 @@ \section{Future Work}
\subsection{Broader Data Collection}
Classification dataset collection was constrained to one set of ten experiments. Each of these experiments had the sensing system recording at the same height and the same angle. This data did contain some elements of variability, such as both sitting and standing occupants. However, further exploration of how the results differ based on the sensor's viewing angle or distance from the ground would provide valuable information.

- Priorities for new experiments include exploring more than three people, investigating classification using classes that encompass ranges of people (e.g. ``1-2 persons'', ``2-4 persons'', ``4-8 persons'', ``8+ Person''), and investigating how placing the sensing system at different angles affects the accuracy of the collected data.
+ Priorities for new experiments include exploring more than three people, investigating classification using classes that encompass ranges of people (e.g. ``1-2 persons'', ``2-4 persons'', ``4-8 persons'', ``8+ persons''), and investigating how placing the sensing system at different angles affects the accuracy of the collected data. Additionally, the limits of the sensor's ability to distinguish large numbers of people with the limited number of pixels available warrant investigation.
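
As a rough illustration of how such range classes might be encoded, the sketch below collapses an exact occupant count into a coarser label. The bin edges mirror the example labels above, and the function name is hypothetical.

    def range_class(count):
        """Collapse an exact occupant count into a coarser range class.
        The overlapping bin edges mirror the example labels above; the
        first matching bin wins."""
        bins = [(1, 2, '1-2 persons'), (2, 4, '2-4 persons'),
                (4, 8, '4-8 persons')]
        if count <= 0:
            return '0 persons'
        for low, high, label in bins:
            if low <= count <= high:
                return label
        return '8+ persons'

    print(range_class(3))  # '2-4 persons'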

\subsection{Different Feature Vectors}
Exploring how different subsets of the three current features, or possibly new features derived from the thermal capture, affect the accuracy of the machine learning algorithms may yield interesting results. We believe that experimenting with features that represent the abstract ``shape'', or ``roundness'', of connected components is a particularly promising area of research.
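
As one plausible formalisation of ``roundness'', the sketch below scores each connected component of a thresholded thermal frame by its circularity, $4\pi A / P^2$; the perimeter estimate and the function name are our assumptions, not part of the existing feature set.

    import numpy as np
    from scipy import ndimage

    def roundness_scores(active):
        """Score each connected component of a boolean pixel mask by
        circularity, 4*pi*area/perimeter**2 (close to 1.0 for a disc).
        The perimeter is crudely estimated as the number of component
        pixels left when the eroded component is subtracted."""
        labels, n = ndimage.label(active)
        scores = []
        for i in range(1, n + 1):
            comp = labels == i
            area = comp.sum()
            perimeter = (comp & ~ndimage.binary_erosion(comp)).sum()
            if perimeter:
                scores.append(4 * np.pi * area / perimeter ** 2)
        return scores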
@@ -69,7 +69,7 @@ \subsection{Improving Robustness}
\subsection{Field-Of-View Modifications}
Several different techniques could be used to address the field-of-view limitations of the \mlx, and exploring them and their cost/complexity implications would be useful. The first of these is applying a lens to the sensor, effectively expanding the field-of-view, but at the cost of distorting the image. Compensating for this distortion while maintaining accuracy presents an intriguing problem.
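
One conventional approach, sketched below with OpenCV, is to calibrate the lens once and then undistort every frame before analysis. The camera matrix and distortion coefficients here are placeholders; no calibration of a lens-equipped \mlx has actually been performed.

    import cv2
    import numpy as np

    def undistort_frame(frame, K, dist):
        """Remap a lens-distorted frame onto an undistorted grid using
        a previously calibrated camera matrix K (3x3) and distortion
        coefficient vector dist."""
        return cv2.undistort(frame.astype(np.float32), K, dist)

    # Placeholder calibration values, for illustration only.
    K = np.array([[10.0, 0.0, 8.0],
                  [0.0, 10.0, 2.0],
                  [0.0, 0.0, 1.0]])
    dist = np.array([-0.3, 0.1, 0.0, 0.0, 0.0])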

- In another direction, using a motor with the \mlx to ``sweep'' a room, and thereby constructing a larger image of the space could also resolve the field-of-view issues. However, this approach also presents problems in stitching the images together in a sensible way. Problems include the lens distortion caused by rotating the sensor and cases where a fast-moving object may be represented multiple times in a stitched capture.
+ In another direction, using a motor with the \mlx to ``sweep'' a room, and thereby construct a larger image of the space, could also resolve the field-of-view issues. However, this approach presents problems in stitching the images together in a sensible way. These include the lens distortion caused by rotating the sensor, potential thermal distortion caused by the motor, increased energy consumption, and cases where a fast-moving object may be represented multiple times in a stitched capture.
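
A minimal stitching sketch, assuming the motor steps through known, evenly spaced angles (so each frame is offset by a fixed number of pixel columns) and ignoring both lens distortion and scene motion:

    import numpy as np

    def stitch_sweep(frames, step_px):
        """Place successive thermal frames side by side, offset by
        step_px columns per motor step, averaging where frames overlap.
        Assumes step_px <= frame width so every column is covered."""
        height, width = frames[0].shape
        total = width + step_px * (len(frames) - 1)
        acc = np.zeros((height, total))
        cnt = np.zeros((height, total))
        for i, frame in enumerate(frames):
            acc[:, i * step_px:i * step_px + width] += frame
            cnt[:, i * step_px:i * step_px + width] += 1
        return acc / cnt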

\subsection{New Sensors}
During this project, an updated version of our sensor, the MLX90621, was released. This version (at a similar price point) doubles the field-of-view in both the horizontal and vertical directions, addressing many of the problems encountered with the size of the detection area in low-ceiling rooms. It offers nearly complete backwards compatibility with the MLX90620. It would be worthwhile to update the project code-base to support it and re-run the experiments to determine how much of an improvement the increased field-of-view provides.
Binary file modified design/design.pdf
8 changes: 4 additions & 4 deletions design/design.tex
@@ -7,9 +7,9 @@

\section{Hardware}

- As reliability and future extensibility are core concerns of the project, a three-tiered system is employed with regards to the hardware involved in the system (\Fref{tab:sensor:tiers}). At the bottom, the ``Sensing Tier,'' we have the sensors themselves. Connected to the sensors via their respective protocols is the ``Preprocessing Tier,'' hosted on an embedded system. The embedded device polls the data from these sensors, performs necessary calculations to turn the raw sensor information into actionable data, and communicates this via Serial over USB to the third tier. The third tier, the ``Analysis Tier,'' is run on a fully fledged computer. In our prototype, it captures and stores temperature and motion data it receives over Serial over USB, as well as visual data for ground truth purposes.
+ As reliability and future extensibility are core concerns of the project, a three-tiered hardware system was employed (\Fref{tab:sensor:tiers}). At the bottom, the ``Sensing Tier,'' we have the sensors themselves. Connected to the sensors via their respective protocols is the ``Preprocessing Tier,'' hosted on an embedded system. The embedded device polls the data from these sensors, performs the necessary calculations to turn the raw sensor information into actionable data, and communicates this via Serial over USB to the third tier. The third tier, the ``Analysis Tier,'' is run on a fully fledged computer. In our prototype, it captures and stores the temperature and motion data it receives over Serial over USB, as well as visual data for ground-truth purposes.

- In the current prototype, the Analysis Tier merely stores captured data for offline analysis. In future prototypes this analysis can be done live and served to interested parties over a RESTful API. In the current prototype, the Analysis and Sensing Tiers are connected by Serial over USB, in future prototypes, this can be replaced by a wireless mesh network, with many Preprocessing/Sensing Tier nodes communicating with one Analysis Tier node.
+ In the current prototype, the Analysis Tier merely stores captured data for offline analysis; in future prototypes this analysis could be done live and served to interested and appropriately authenticated parties over a secure RESTful API. Similarly, the Analysis and Sensing Tiers are currently connected by Serial over USB; in future prototypes this could be replaced by a wireless mesh network, with many Preprocessing/Sensing Tier nodes communicating with one Analysis Tier node.
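
For concreteness, the Analysis Tier's input path can be as simple as the pyserial sketch below; the device path and baud rate are illustrative assumptions, not the prototype's actual configuration.

    import serial  # pyserial

    port = serial.Serial('/dev/ttyUSB0', baudrate=115200, timeout=1)
    try:
        while True:
            line = port.readline()   # one preprocessed sample per line
            if line:
                print(line.strip())  # hand off to analysis or storage
    finally:
        port.close()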

\begin{table}
\centering
@@ -183,7 +183,7 @@ \section{Software}

At the Preprocessing Tier (the Arduino), the \tarl \mlx driver is found; it is written in the default Arduino C++ derivative language. The use of a low-level language is important at this tier, as careful management of memory usage and processing time is required in such a resource-constrained environment.

- At the Analysis Tier, a general purpose computer is used, and this is where the bulk of \tarl can be found. As the processing environment is less constrained, a choice of language becomes a possibility. In this instance, Python was chosen as \tarl's language on the Analysis Tier. Python was chosen as it is a high-level language with excellent library support for the functions required of the Analysis Tier, including serial interfacing, the use of the Raspberry Pi's built in camera, and image analysis. The 2.x branch of Python was chosen over the 3.x branch, despite its age, due a greater maturity in support for several key graphical interface libraries. The core of the Analysis Tier's code is based upon the algorithm detailed by the ThermoSense paper, which provide an overview of here.
+ At the Analysis Tier, a general-purpose computer is used, and this is where the bulk of \tarl can be found. As the processing environment is less constrained, a wider choice of language becomes possible. In this instance, Python was chosen as \tarl's language on the Analysis Tier, as it is a high-level language with excellent library support for the functions required of this tier, including serial interfacing, the use of the Raspberry Pi's built-in camera, and image analysis. The 2.x branch of Python was chosen over the 3.x branch, despite its age, due to the greater maturity of support for several key libraries. The core of the Analysis Tier's code is based upon the algorithm detailed in the ThermoSense paper, which we provide an overview of here.
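
In outline (this is our reading of ThermoSense rather than a verbatim reimplementation), each frame is background-subtracted and thresholded, and a small feature vector is extracted for the classifier:

    import numpy as np
    from scipy import ndimage

    def thermosense_features(frame, background, threshold):
        """Feature vector in the spirit of ThermoSense: the number of
        'active' pixels above the background model, the number of
        connected components they form, and the size of the largest
        component."""
        active = (frame - background) > threshold
        labels, n = ndimage.label(active)
        if n == 0:
            return [0, 0, 0]
        sizes = ndimage.sum(active, labels, index=range(1, n + 1))
        return [int(active.sum()), n, int(max(sizes))]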

\subsection{ThermoSense Implementation}
\label{subsec:thermosenseimplementation}
@@ -264,7 +264,7 @@ \subsubsection*{\texttt{pxdisplay} functions}
The class also provides a set of functions to set ``hottest'' and ``coldest'' temperatures and assign RGB colours, running from red through green to blue, to each temperature value that falls between those two extremes.
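
A minimal sketch of such a mapping, assuming a linear blue-to-green-to-red ramp (the exact interpolation \texttt{pxdisplay} uses may differ):

    def temp_to_rgb(temp, coldest, hottest):
        """Map a temperature onto a blue -> green -> red ramp: coldest
        maps to blue, the midpoint to green, and hottest to red."""
        span = float(hottest - coldest)
        x = min(max((temp - coldest) / span, 0.0), 1.0)
        if x < 0.5:
            g = int(round(255 * x / 0.5))
            return (0, g, 255 - g)   # blue fades into green
        r = int(round(255 * (x - 0.5) / 0.5))
        return (r, 255 - r, 0)       # green fades into red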

\subsubsection*{\texttt{Visualizer} class}
- The \texttt{Visualizer} class is the natural compliment to the \texttt{Manager} series of classes. The functions contained within can be provided with a Queue object (generated by a \texttt{Manager} class) and can perform a variety of visualisation and storage functions.
+ The \texttt{Visualizer} class is the natural complement to the \texttt{Manager} series of classes. The functions contained within can be provided with a Queue object (generated by a \texttt{Manager} class) and can perform a variety of visualisation and storage functions.

From the recording side, the \texttt{Visualizer} class can ``record'' a thermal capture by saving the motion and thermal information to a \texttt{.tcap} file, which stores the sample rate, timings, and thermal and motion data from a capture in a simple, plain-text format. The class can also read these files back into the data structures \texttt{Visualizer} uses internally to store data. If \texttt{Visualizer} is running on a Raspberry Pi, it can also leverage the \texttt{picamera} library and the \texttt{OnDemandManager} class to synchronously capture both visual and thermal data for the purposes of ground-truth verification.
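
As an illustration of how such a capture might be serialised (the actual \texttt{.tcap} field layout is not reproduced here, so the format below is an assumption):

    def write_tcap(path, sample_rate, frames):
        """Write a capture as plain text: a header line carrying the
        sample rate, then one line per frame holding the timestamp, a
        motion flag, and the comma-separated pixel temperatures."""
        with open(path, 'w') as f:
            f.write('rate %d\n' % sample_rate)
            for timestamp, motion, temps in frames:
                f.write('%.3f %d %s\n' % (timestamp, int(motion),
                        ','.join('%.2f' % t for t in temps)))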

Binary file modified evaluation/evaluation.pdf