As a data scientist who has worked on geospatial data for more than a year, I can say that traffic prediction has always been a great challenge for our team. We use machine learning for traffic estimation, and our navigation system runs on the resulting live traffic data. We built a simple traffic prediction service that estimates travel time for navigation, and its output is surprisingly accurate. Our travel-time predictions have shown excellent results, except during anomalies such as vacation days or unpredictable shifts in traffic behavior. These anomalies occur less than 20 percent of the time, but our users need accurate predictions precisely in those moments, so here I will describe the approach we took to solve the problem.
There are many well-known approaches to traffic prediction, such as ARIMA, SARIMA, Kalman filters, particle filters, and theoretical traffic-propagation methods. But in the real world, when you must respond to your users in less than half a second and you have a huge amount of dirty, missing, and imbalanced data, these methods simply do not hold up.
So, here is what we did to predict traffic:
As an initial idea, I proposed a simple linear-algebra-based method to find the historical traffic state most similar to the live one. Suppose we have found the state most similar to the current one. We assume that the traffic 20 minutes from now should resemble the traffic 20 minutes after that similar state. This method had a number of problems. For example, when you compute a dot product over the whole city, you cannot avoid an accident in the north-west influencing which states look similar in the south-east, even though the two areas are unrelated.
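To make the initial idea concrete, here is a minimal sketch of that whole-city similarity search. The function name and the toy data are illustrative, not our production code; each historical snapshot is flattened into one vector covering the entire city, which is exactly why a local incident anywhere affects the match everywhere.

```python
import numpy as np

def most_similar_state(live, history):
    """Return the index of the historical traffic state most similar
    to the live state, using cosine similarity over the whole city.

    live    : 1-D array, one congestion value per road cell
    history : 2-D array, one row per historical snapshot
    """
    # Normalize rows so the dot product becomes cosine similarity.
    hist_norm = history / np.linalg.norm(history, axis=1, keepdims=True)
    live_norm = live / np.linalg.norm(live)
    sims = hist_norm @ live_norm
    return int(np.argmax(sims))

# Toy example: 3 snapshots of a 4-cell city.
history = np.array([[1.0, 0.0, 0.0, 0.0],
                    [0.5, 0.5, 0.5, 0.5],
                    [0.0, 1.0, 1.0, 0.0]])
live = np.array([0.4, 0.5, 0.5, 0.4])
print(most_similar_state(live, history))  # -> 1, the uniform snapshot
```

Note that every cell contributes to the single similarity score, so one anomalous region skews the match for the whole city.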
We then switched to a local search to find the most similar traffic state in our history. The idea is very simple: cut the city's map into pieces. We used square cells of 200 m × 200 m, and defined a mesh of 5 × 5 cells, so each mesh covers 1 km². Each mesh contains 9 overlapping collections of 3 × 3 cells, as illustrated in the following figure.
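The partitioning step can be sketched as two integer divisions, assuming coordinates are already projected to meters (the function names here are illustrative):

```python
def cell_index(x, y, cell_size=200.0):
    """Map projected coordinates (meters) to a (row, col) index
    on a 200 m x 200 m cell grid."""
    return int(y // cell_size), int(x // cell_size)

def mesh_index(row, col, mesh_cells=5):
    """Group cells into 5 x 5 meshes, each covering 1 km x 1 km."""
    return row // mesh_cells, col // mesh_cells

# A point at (1234 m, 987 m) falls in cell (4, 6), which belongs to mesh (0, 1).
r, c = cell_index(1234.0, 987.0)
print((r, c), mesh_index(r, c))
```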
Traffic in each local area is then predicted from its own data. The whole dataset is huge, yet in these smaller pieces we face heavily missing and imbalanced data, so what we needed was feature engineering. I defined features for the traffic data in each cell and performed matrix operations on each mesh to compare its live state with its historical states. In the end, the best way to find the most similar state turned out to be treating each mesh as a vector in a 25-dimensional space, one element per cell. We can then compute the cosine similarity between the live traffic state and each historical state, which reduces to a dot product once the vectors are normalized, and find the most similar state very fast. After finding the similar state, we tested our initial idea: can we take the traffic 20 minutes after that state as a prediction of the traffic 20 minutes from now? The answer is yes. The predictions are remarkably fast and accurate. Another advantage of this method is that it can run entirely in the database layer via SQL queries.
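The per-mesh search and the 20-minute lookahead can be sketched as follows. This assumes snapshots are taken every 10 minutes (so a 20-minute horizon is 2 steps) and uses an illustrative function name; the actual cell features were engineered, not raw values as in this toy example.

```python
import numpy as np

def predict_mesh(live_vec, history_vecs, horizon_steps=2):
    """Find the most similar historical mesh state and return the
    state `horizon_steps` snapshots after it as the prediction.

    live_vec     : 25-element feature vector (one value per cell)
    history_vecs : (T, 25) array of historical mesh vectors, time-ordered
    """
    norms = np.linalg.norm(history_vecs, axis=1) * np.linalg.norm(live_vec)
    sims = history_vecs @ live_vec / norms
    # Only states that still have `horizon_steps` of future data qualify.
    best = int(np.argmax(sims[:-horizon_steps]))
    return history_vecs[best + horizon_steps]

# Toy example: 10 historical 10-minute snapshots of one mesh. The live
# state is nearly identical to snapshot 3, so the prediction should be
# snapshot 5 (20 minutes, i.e. 2 steps, later).
rng = np.random.default_rng(0)
history = rng.random((10, 25))
live = history[3] + 0.01 * rng.random(25)
prediction = predict_mesh(live, history)
```

Because the whole computation is dot products and an argmax over precomputed vectors, the same logic maps naturally onto SQL aggregates over a table of historical mesh vectors.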
How do we get rid of network issues that cause errors in traffic predictions? Here we come to the last part of our traffic prediction journey. The OpenStreetMap network graph always has issues such as mistaken dead ends and reversed one-way directions, so predictions made on this network may be incorrect. Instead, think of each traffic state as an image: we can take a picture of each mesh by leaving the lens open for 10 minutes and then taking the shot. The next idea is to compute a HOG (Histogram of Oriented Gradients) from these images. HOG is a very simple representation that captures the basic structure of an image in a compact way. For more information on HOG, please read this post.
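To show the idea, here is a minimal HOG-style descriptor in plain NumPy: per-cell histograms of gradient orientations, weighted by gradient magnitude. This is a simplified sketch of the technique (it omits block normalization that full HOG implementations such as scikit-image's `skimage.feature.hog` perform), not our production pipeline.

```python
import numpy as np

def hog_descriptor(image, cell=8, bins=9):
    """Minimal HOG-style descriptor: one histogram of gradient
    orientations (magnitude-weighted) per cell. No block
    normalization, so this is a sketch of the idea only."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned orientation
    h, w = image.shape
    rows, cols = h // cell, w // cell
    desc = np.zeros((rows, cols, bins))
    for r in range(rows):
        for c in range(cols):
            m = mag[r*cell:(r+1)*cell, c*cell:(c+1)*cell].ravel()
            a = ang[r*cell:(r+1)*cell, c*cell:(c+1)*cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            desc[r, c] = hist
    return desc.ravel()

# 32x32 toy "traffic image" -> 4x4 cells x 9 bins = 144 features.
img = np.zeros((32, 32))
img[:, 16:] = 1.0  # a single vertical edge
print(hog_descriptor(img).shape)  # (144,)
```

Because the descriptor summarizes local gradient structure rather than exact graph topology, it is less sensitive to map errors like a mislabeled dead end in a single cell.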