From 4080160d3abcc84ac2e31fc281b2e994450d86db Mon Sep 17 00:00:00 2001
From: David Rubin
Date: Thu, 21 Mar 2024 20:05:33 -0700
Subject: [PATCH] format the paper a bit

---
 Paper/main.tex | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/Paper/main.tex b/Paper/main.tex
index 6e578c2a2..c4ad73e75 100644
--- a/Paper/main.tex
+++ b/Paper/main.tex
@@ -22,11 +22,11 @@ \maketitle
 
 \begin{abstract}
-    TODO: abstract
+  TODO: abstract
 \end{abstract}
 
 \section{Rationale}
-    Navigation over uncharted terrain has always required precise instruments
+Navigation over uncharted terrain has always required precise instruments
 and careful data collection. Thanks to modern technology, such as satellite
 systems and GPS, no area is left completely undocumented
 \citep{deschamps-berger2020, kervyn2007, li1988}.
 Those who want to autonomously explore areas are left with two options:
@@ -36,7 +36,7 @@ \section{Rationale}
 This preexisting data, while it cannot be relied on, can be useful for
 navigation.
 
-    A pathfinding situation consists of a “predicted” map, which is available
+A pathfinding situation consists of a “predicted” map, which is available
 immediately and a “ground-truthed”, “observed” or “actual” map which is
 revealed to the algorithm or model as it explores the map. These two maps
 have a variation between them, the degree of which is referred to as the
@@ -49,19 +49,19 @@ \section{Rationale}
 nature of the D* algorithm and the machine learning model, the chosen path
 will most often not be the best path.
 
-    The most popular open-loop pathfinding algorithm is A* (pronounced “a-star”)
+The most popular open-loop pathfinding algorithm is A* (pronounced “a-star”)
 uses node based weighted pathfinding and prioritizes pathfinding in
 directions that appear to be better. D* is an adaptation of A* for dynamic
 environments using an “incremental search strategy” when the final
 environment cannot be determined (X. Sun et al., 2010).
 
-    Many different techniques for machine learning
+Many different techniques for machine learning
 (ML) exist (Amit Patel, 2022). For this application, reinforcement
 learning is best as it has shown success with pathfinding (Roy et al.,
 2017) and is able to make a choice between exploration and using past
 data which is necessary for using a combination of both maps.
 
-    This research will describe the
+This research will describe the
 feasibility and limits of the use of a ML model for navigation of real
 autonomous vehicles compared to traditional (A* and D*) pathfinding
 algorithms.
@@ -69,4 +69,4 @@ \section{Rationale}
 
 \bibliographystyle{plainnat}
 \bibliography{references}
-\end{document}
+\end{document}
\ No newline at end of file