We therefore welcome papers that explain how algorithms can be redesigned to accommodate constraints imposed by the pre-existing software, hardware and interfaces that pervade real-world systems. We do recognise the value of research that focuses on a simulated setting with clean interfaces to raw data, where problems can be tackled using a state-of-the-art GPU cluster running the latest version of a compiler. However, our interest is in understanding how to achieve similar performance when adhering to pre-existing software interfaces and using legacy hardware, neither of which was designed with AI in mind.
Similarly, while we recognise the value of testing against reference datasets and using general-purpose, well-established metrics to quantify performance (e.g. F-score and RMS error), we are also interested in papers that describe experiments and/or trials of using AI in the real world, in which the ease of adoption and the resulting operational benefit of the AI are themselves the metrics for success.
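For reference, the standard definitions of these two metrics are the $F_1$ score (the most common F-score variant), expressed in terms of precision $P$ and recall $R$, and the root-mean-square error over $n$ predictions $\hat{y}_i$ against ground truth $y_i$:
\[
F_1 = \frac{2PR}{P + R}, \qquad
\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^2}
\]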
We also welcome work that addresses the challenge of convincing people with strongly held irrational beliefs related to AI.
Challenges in Deploying AI in the Real World
We are facing a global reproducibility crisis in science and research. Artificial intelligence is no exception, in part due to the complexity and ever-increasing scale of training and testing data, and we still have many lessons to learn \cite{research}.
As a community, we need to tackle this dangerous lack of transparency head-on, with real-world computational methods that can not only be deployed reproducibly but also be shown to work on real data, at scale.
A fundamental premise of Applied Artificial Intelligence Letters is that we feature scientifically significant articles about the actual “application” of modern AI technologies. In our view, an application that cannot be deployed at scale, on modern elastic computational and data infrastructure, to solve global-scale challenges does not pass our “litmus test” of genuine deployment. New methods must work and scale on global-scale data challenges; scalability must be “baked in” to any new AI algorithm.
Both the scalability and the transparency of methods will need to be identified and fully described in every AAIL article accepted for publication. Modern HPC centers have invested significantly both in training \cite{computing} and in providing easy access to scale-out computing, whether at national centers or, more often, in partnership with major hyperscalers, allowing researchers to test and prove out their methods at scale on real-world, actionable data.
Modern startups and the global technology industry have long understood that access to large-scale in silico infrastructure is a key business differentiator in many verticals. It is our hope that, by combining the detailed work of the academy with industrial scaling techniques, and by partnering on articles together, AAIL publications will further highlight the critical role of technology transfer in artificial intelligence from universities into industry, and will make for compelling reading and solid reference material for everyone.
[1] https://arxiv.org/abs/2003.00898
[2] http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1001745