I've been wondering about this ever since I first saw Tesla's Autonomy Investor Day presentation.
There seem to be quite a few hints about what they are most likely doing in some of Andrej Karpathy's Q&As and presentations. George Hotz's perspective (Comma.ai founder and the first iPhone jailbreaker) is also interesting, since he's building a competing autopilot system using similar principles. (Interesting note: he almost got the contract to write AP2.0.)
George Hotz believes Tesla's lane change is always the same lane change and quite basic. He believes they (Karpathy) can and will do much better after the May 2019 autopilot team restructuring. He also has some interesting insights on the level of effort required for full self-driving.
Although I also believe Tesla is primarily using the NN for perception, its engineers (Karpathy; Stuart, now an ex-employee) have also mentioned that they are using the NN to fine-tune parameters for their control algorithms.
I would assume that, at a high level, the "stack 1.0" control algorithm must be something similar to a classic control loop.
You have a target command and you convert it into an output command in order to reach that target over time with a certain behavior, adapting based on the system's feedback. At a basic level I could imagine something like a PID controlling a lateral/longitudinal velocity vector, plus another control loop that outputs the target vector in order to trace your path.
Factors from the perception NN would influence this target vector. What I could also imagine is the NN fine-tuning the control algorithms' parameters over time.
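To make the idea concrete, here is a minimal sketch of that kind of cascaded loop: an outer path tracker turns a perceived lateral offset into a target lateral velocity, and an inner PID turns the velocity error into a steering command. All names, gains, and the simplified dynamics are my own illustrative assumptions, not anything confirmed about Tesla's actual stack.

```python
class PID:
    """Classic PID: output = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd):
        # In the scheme speculated above, these gains are the kind of
        # parameters a NN might fine-tune over time.
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, target, measured, dt):
        error = target - measured
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def steering_command(pid, lateral_offset, lateral_velocity, dt, path_gain=0.5):
    """Outer loop: derive a target lateral velocity that drives the
    perceived lateral offset toward zero; inner PID tracks that target."""
    target_velocity = -path_gain * lateral_offset
    return pid.update(target_velocity, lateral_velocity, dt)


if __name__ == "__main__":
    # Toy simulation: start 1 m off the lane centre and let the loop
    # pull the offset toward zero (command integrates into velocity,
    # velocity integrates into offset).
    pid = PID(kp=1.0, ki=0.1, kd=0.05)
    offset, vel, dt = 1.0, 0.0, 0.1
    for _ in range(200):
        cmd = steering_command(pid, offset, vel, dt)
        vel += cmd * dt
        offset += vel * dt
    print(f"final offset: {offset:.3f}")
```

Running the toy simulation, the offset converges toward zero, which is all this sketch is meant to show: the NN's perception output only needs to feed the outer loop's target, while the classic loops do the actual actuation.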
To hear it from the horse's mouth, I think Elon's response at this timestamp [3:34:5] explains the current NN utilisation.