Twenty-five years ago, I wrote a paper (Hensher and Ton (2000)) comparing machine-learning algorithms based on neural networks with the behaviourally more appealing (less ‘black box’) discrete choice models. At the time, the curiosity was whether machine learning, with its training algorithms, could improve on the predictive performance of a simple multinomial logit choice model. We were then unaware of the pending explosion of interest in machine learning as megabytes of data became available, and of what is now known as artificial intelligence (AI) and generative AI (G-AI). G-AI models use neural networks to identify the patterns and structures within existing data in order to generate new and original content, something we did for many years under the name of classification and regression trees (CART; Breiman et al. 1984)2, albeit with smaller data sets. One of the innovations of G-AI models is the ability to leverage different learning approaches, including unsupervised or semi-supervised learning, for training. In discussing the explosion of interest in AI and G-AI, I want to be clear that I am not on a warpath but rather in search of the ‘available opportunity’.
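The kind of comparison made in Hensher and Ton (2000) can be sketched in a few lines. The following is purely illustrative, not the authors' method or data: it pits a simple binary logit (fitted by gradient ascent) against a single-split, CART-style decision stump on synthetic mode-choice data. The attributes, coefficients, and the car-versus-public-transport framing are all invented for the example.

```python
# Toy comparison: binary logit vs. a one-split CART-style stump.
# All data here are synthetic; the 'true' utility coefficients are assumed.
import math
import random

random.seed(0)
n = 2000
# Hypothetical attribute differences (e.g., travel time, cost), standard normal
X = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]

def p_car(x):
    # Assumed data-generating logit: utility of car over public transport
    u = 1.2 * x[0] - 0.9 * x[1]
    return 1 / (1 + math.exp(-u))

y = [1 if random.random() < p_car(x) else 0 for x in X]
train, test = list(zip(X, y))[:1500], list(zip(X, y))[1500:]

# Binary logit fitted by plain gradient ascent on the log-likelihood
b = [0.0, 0.0]
for _ in range(200):
    g = [0.0, 0.0]
    for x, t in train:
        p = 1 / (1 + math.exp(-(b[0] * x[0] + b[1] * x[1])))
        g[0] += (t - p) * x[0]
        g[1] += (t - p) * x[1]
    b = [b[0] + 0.01 * g[0] / len(train), b[1] + 0.01 * g[1] / len(train)]

def logit_acc(data):
    # Predict 'car' when estimated utility is positive
    return sum((b[0] * x[0] + b[1] * x[1] > 0) == (t == 1) for x, t in data) / len(data)

# CART-in-miniature: one split on the first attribute at the best threshold,
# chosen by training accuracy
best_thr = max((x[0] for x, _ in train),
               key=lambda thr: sum((x[0] > thr) == (t == 1) for x, t in train))

def stump_acc(data):
    return sum((x[0] > best_thr) == (t == 1) for x, t in data) / len(data)

print(f"logit accuracy on held-out data: {logit_acc(test):.3f}")
print(f"stump accuracy on held-out data: {stump_acc(test):.3f}")
```

Because the synthetic choices are generated by a logit process, the logit should do at least as well here; the interesting question in practice, as in the original paper, is how the ranking changes on real behavioural data.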
I thought I would revisit the meanings of ‘artificial’ and ‘intelligence’ and see whether we are describing this new tool appropriately. According to the Cambridge dictionary3, the word artificial refers to something that is ‘made by people, often as a copy of something natural.’ The word intelligence refers to the ability to think, to learn from experience, to solve problems, and to adapt to new situations; although in recent times it has been interpreted as involving mental abilities such as logic, reasoning, problem-solving, and planning.
While AI can claim to align quite well with these meanings, one theme remains concerning, namely the ability ‘to adapt to new situations.’ One wonders whether this is only possible where the new situation is a small variation around current or past behaviour, given that G-AI models learn the patterns and structure of their input training data and then generate new data that has similar characteristics (my emphasis) and is possibly repetitive in nature. Take a future where, for example, we have 100% electric cars, and active travel and micro-mobility are dominant transport modes, which is a non-marginal variation on the past and the present. How well can AI or G-AI predict this circumstance (in contrast to a human-devised scenario), unless it is already in the available data (and even then it might be questionable), by trawling the existing databases and rules on offer? If expectations of the future are widely divergent, however, one might consider ‘tuning’ AI (over a mass of data) to discern the most probable future at an earlier stage than might be achieved by any other methodology. One appealing ingredient (found in the trawling exercise, if in the public domain) could be the many studies that have undertaken some form of stated preference study to explore behavioural intentions under future scenarios that test for 100% electric cars, and for active travel and micro-mobility. One doubts, however, whether this is enough to give us confidence in the future circumstance.4 Indeed, a great deal of potentially very useful data (e.g., unit records on individuals’ travel behaviour) is never released into the public domain5. Such data, hidden from AI’s reach, is exactly what transport planners should be using to inform the present, the near future, and possibly the distant future. Sadly, in a situation where there is high divergence across the likely future scenarios, while G-AI is likely to be of little use, that limitation applies to most other forecasting methods as well.
An important question is: ‘can we’, and ‘how can we’, derive benefit from this new AI tool? Being optimistic, if you applied AI to a mass of behavioural observation data, you might be able to detect patterns of decision making that would rival, or even surpass, other analytical approaches. Once discerned, those patterns of decision making can be fed back into the policy sphere to nudge the process a little to the left or right, hopefully towards a better societal outcome.
We have, I believe, a real dilemma, described brilliantly by Anable and Goodwin (2021) in the context of de-carbonising transport, which they liken to shot silk. The warp (blue) relates to still being able to use our cars, because they will be electric; we will still be able to fly away on holiday, using non-carbon fuel; and technology will give us a timely transition. The weft (green) is the potential for significant traffic reduction, including a substantial mode shift to walking, cycling and public transport, increasing car occupancy overall, and embedding transport de-carbonisation principles in spatial planning to ensure that new development promotes sustainable travel choices. The challenge, however, is that only one colour is typically seen, depending on where the viewer is standing. This behavioural positioning seems to me to be a problem for G-AI, since the data in place and the training tools may be challenged beyond their ability to take this situation on board and do anything materially useful with it.
So, the real question becomes: ‘how much can we depend on the outputs of AI and G-AI to guide us in making decisions on our future’, and to replace traditional sources of data such as household surveys? There has been a lot of scope creep, particularly with AI attempting to move into more behavioural areas of research, when it works best in automation and perhaps the non-behavioural aspects of performance6.
It appears to me that this is equivalent to the view of experienced transport modellers that ‘models are a useful guide to contribute to the debate that ultimately will be dependent on many other soft as well as political factors.’ Time will tell, but I suspect we (or at least many of us) are in the ‘love affair’ phase with AI, and in time it will be placed in context as a useful but not so dominant piece of the puzzle of life. Could G-AI then become nothing more than a source of information ambiguity and/or an intelligent agnostic for strategic transport planning and policy decision making?
Anable, J and Goodwin, P (2021) Two Futures: Transport Policy, Planning and Appraisal for the New Climate Reality. (Forthcoming).
Breiman, L., Friedman, J.H., Olshen, R.A. and Stone, C.J. (1984) Classification and Regression Trees, Chapman and Hall/CRC, an imprint of the Taylor and Francis Group, Boca Raton, FL.
Hensher, D.A. and Ton, T. (2000) A predictive assessment of neural networks and discrete choice methods (presented at the 8th WCTR, Antwerp, July 1998), Transportation Research Part E, 36 (3), September, 155-172.
The focus of this opinion piece is on strategic travel model systems. If you have a connected autonomous vehicle (CAV), then AI inputs make better sense. Discussions and comments by Ian Christensen, CEO of iMOVE, are appreciated.
The use of trees in regression dates back to AID (Automatic Interaction Detection), a program developed at the Institute for Social Research, University of Michigan, by Morgan and Sonquist in the early 1960s. The analogous classification program, THAID, was developed at the Institute in the early 1970s by Morgan and Messenger.
We have had several episodes recently in which the future initially appears to comprise binary options, but subsequent reality displays a much more nuanced and combinatorial outcome. Examples include: AVs will take over the world; office work is dead; net zero by ‘20anything’; and the domination or demise of private passenger vehicles. I thank Ian Christensen for this insight.