For all the attention and discussion, you could be forgiven for thinking that artificial intelligence (AI) is fundamentally changing every industry and sector. The more cynical, with the Gartner Hype Cycle in mind, may feel that the lack of demonstrable applications of AI (and the associated machine learning and deep learning) points to the Peak of Inflated Expectations, or even the Trough of Disillusionment. In reality, of course, progress differs from industry to industry and from application to application. In some fields – notably healthcare, and specifically cancer detection – AI is already having a significant positive impact. In other areas, progress is steadier. Video surveillance is one of them.
In our industry today, machine learning and deep learning are mostly used for video analytics, but we expect the technology to become an important component in many different applications and products in the future. Over time it will become a common tool for software engineers and will be included in many different environments and devices. But, again, its application will be driven by the most compelling use cases, not by the technology itself. There is a temptation in the surveillance and security sector to over-promise in relation to new technologies. This has been true of AI in video analytics and, particularly, of some of the claims made around the current application of deep learning. With AI and deep learning, as with any new technology, we’re committed to making sure its implementation is robust, reliable and addresses real customer challenges.
Deep learning consists of two distinct phases: the training phase and the execution (or inference) phase. The former requires a lot of processing power, data and time, so it will most likely run on a server and/or in the cloud, while additional training (fine-tuning) could be done at the edge (a neat link into our next trend). The execution phase – applying the trained model to new data – can be done at any level within the system, depending purely on how much processing power is required and how time-critical the application is.
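The split between the two phases can be made concrete with a deliberately tiny sketch. This is purely illustrative (a toy logistic-regression "model", not any real video-analytics system): the `train` function stands in for the compute-heavy server-side training phase, while `execute` stands in for the lightweight inference phase that could run at the edge.

```python
import math
import random

def train(samples, labels, epochs=200, lr=0.5):
    """Training phase: compute-intensive, typically run on a server or in the cloud.
    Fits a tiny logistic-regression model (weights + bias) to labelled data."""
    random.seed(0)
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid prediction
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def execute(model, x):
    """Execution (inference) phase: cheap enough to run at the edge,
    e.g. inside a camera. Only applies the already-trained model."""
    w, b = model
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

# Toy stand-in for a video-analytics task: classify 2D points.
data = [(0.1, 0.2), (0.9, 0.8), (0.2, 0.1), (0.7, 0.9)]
labels = [0, 1, 0, 1]
model = train(data, labels)   # heavy step, done once centrally
print(execute(model, (0.95, 0.85)))  # expect 1
print(execute(model, (0.05, 0.1)))   # expect 0
```

Note how `execute` carries no training logic at all; in a real deployment, only the trained weights would need to be shipped to the edge device.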
Research and progress will continue, steadily, and bring incremental improvements and benefits over the next year rather than radical change.
Cloud and edge computing
If AI could still be said to be in the earlier stages of the Gartner Hype Cycle, it’s difficult to argue that cloud computing is anything other than firmly established and heading towards, if not already on, the Plateau of Productivity. There can be few organizations in the private or public spheres that aren’t making use of cloud computing at some level, and many have moved their entire infrastructures to a cloud-based model.
That said, cloud computing is based on centralized computing in one or more data centers, and as the number of connected Internet of Things (IoT) devices grows exponentially, so does the amount of data produced. Even as more data centers with ever-increasing capacity are built, this tsunami of data could become overwhelming. This is particularly critical in areas such as video surveillance, where, despite the development of technologies designed to reduce storage and bandwidth needs, data demands are still significant.
This is where the benefits of edge computing come to the fore. In simple terms, as its name suggests, edge computing puts more data processing at the ‘edge’ of the network, close to where the data is collected by the sensor and before transfer to the data center. One particular benefit in some sectors relates to the speed of processing and the ability to act upon the data captured. Take, for instance, an autonomous vehicle. Without edge computing – where both data capture and processing take place in the vehicle itself – the delay in communicating with a cloud-based data center, even if only milliseconds, might be the difference between the vehicle avoiding an accident or not.
In our business, edge computing means processing data within the camera itself. While perhaps not as dramatic as avoiding road accidents, the benefits can still be significant. Firstly, initial processing of data within the camera can significantly reduce the bandwidth demands of both data transfer and storage. Additionally, data can be anonymized and encrypted before it is transferred, addressing security and privacy concerns. Ultimately, cloud and edge computing will not be an ‘either/or’ decision; the two will work in balance to the greatest benefit.
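The two benefits described above (bandwidth reduction and anonymization before transfer) can be sketched in a few lines. This is a hypothetical toy, not Axis firmware: frames are simulated as lists of pixel values, motion detection is a crude brightness comparison, and the anonymization is a simple hash of the camera identifier. The point is only that a small event, not a raw frame, leaves the device.

```python
import hashlib
import json

def process_at_edge(frame, prev_frame, camera_id, threshold=10):
    """Illustrative edge-processing step run 'inside the camera':
    1. Detect motion by comparing the brightness of consecutive frames.
    2. Anonymize the camera identifier before anything leaves the device.
    3. Emit a small JSON event instead of the raw frame, cutting bandwidth."""
    motion = abs(sum(frame) - sum(prev_frame)) > threshold
    if not motion:
        return None  # nothing worth transmitting
    anonymized_id = hashlib.sha256(camera_id.encode()).hexdigest()[:12]
    event = {"source": anonymized_id, "event": "motion"}
    return json.dumps(event)  # a few bytes vs. kilobytes per raw frame

# Two toy 8-"pixel" frames; the second is brighter, so motion is detected.
quiet = [10] * 8
busy = [40] * 8
print(process_at_edge(busy, quiet, "lobby-cam-01"))   # small JSON event
print(process_at_edge(quiet, quiet, "lobby-cam-01"))  # None: no transfer needed
```

In practice the encrypted, anonymized event would then be sent on to cloud storage or an operations center, with the cloud and the edge each doing the work they are best suited to.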
Personalization vs privacy
In years to come, 2018 may be seen as the year when broad awareness of data privacy – particularly that associated with personal information – reached its highest point. For those in the public and private sectors, the EU’s General Data Protection Regulation (GDPR) brought a higher level of scrutiny than ever before to how organizations collect, store, share and use personal information (including that captured by video surveillance). For the broader consumer, however, it is more likely the issues relating to Facebook’s use of data that have heightened awareness and concern about what happens to the personal data given away online.
Ultimately, we live in a world where we have been given valuable online services in exchange for knowingly or unknowingly handing over a significant amount of personal data. Indeed, this data is used by the likes of Facebook, Amazon, Google and others to increase the value of these services through a high degree of personalization. To many, however, it feels like a line has been crossed between useful personalization and invasion of privacy, and the rumors that home voice assistants listen in on domestic conversations will only increase this unease.
Ultimately, the trust between an organization and its customers is becoming an increasingly important and tangible asset. Indeed, recent research from consulting firm Accenture has established a correlation between stakeholder trust and revenue. Concerns about a company’s approach to privacy and the use of personal data will be one of the most impactful aspects of trust in business moving forward.
Cybersecurity
Can something continue to be a ‘trend’ when it appears every year and is a constant concern? Whatever your answer, it’s impossible to think about issues that will affect every sector this year without mentioning cybersecurity. Indeed, in relation to the previous point, the fastest way to damage trust between a company and its customers (and shareholders) is through a cybersecurity breach. Just ask British Airways.
Cybersecurity will never be ‘solved’, because cybercriminals (and, increasingly, nation states) will never stop trying to find and exploit vulnerabilities. These actors are incredibly well funded and organized, and can innovate much more quickly than companies that need to adhere to industry regulations. Attacks are becoming more sophisticated at a time when the growing number of connected devices means that potential vulnerabilities and insecure network endpoints are multiplying exponentially.
One particular area of vulnerability highlighted recently is the supply chain, where either poor cybersecurity practice or deliberately malicious actions can result in breaches introduced through both software and hardware. The provenance of products is more critical than ever, with manufacturers needing to be confident that every link in their supply chain is as secure as it should be.
Smart technology to deliver environmental benefits
We’ve already seen how video analytics can be used as an operational planning tool by organizations looking to improve energy efficiency within offices, with the subsequent positive benefits for the environment. But new types of sensors can more accurately measure environmental impact across an organization’s sites, effectively acting as highly sensitive artificial ‘noses’ calibrated to different forms of output, and thermal imaging can be used to pinpoint areas of energy wastage.
For instance, one critical area where such sensors can heighten awareness, improve understanding and, increasingly, allow for remedial action is air quality. Whether inside buildings or in the external urban environment, the negative impacts on health and the associated costs are becoming an ever-greater issue. Smart sensors will have a central role to play in addressing the problem globally.
Such applications add value to organizations through efficiencies and cost savings (and, hopefully, health benefits), but also help them reach their own environmental and sustainability goals.
Sensor integration driving smart actions
In themselves, individual sensors such as those described above can deliver significant benefits. But a final trend that we’re confident will be increasingly prevalent in 2019 is the combination and integration of sensors to prompt ‘smart’ actions.
For instance, in a smart city, a motion sensor connected to a barrier could trigger a camera which, in turn, would trigger an alert in the operations center, allowing for a rapid and appropriate response. Or an environmental sensor could trigger a video or thermal camera to quickly identify fires or spillages, again prompting alerts that enable a more rapid and effective response. When the range of sensors is considered – from thermal to motion, from atmospheric to video – the ways in which they could be combined are endless, as are the potential benefits of doing so.
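The barrier-to-camera-to-operations-center chain described above is, at heart, an event-driven pattern. The following is a minimal publish/subscribe sketch under hypothetical topic names (`barrier.motion`, `camera.capture`); any real deployment would use a proper messaging system, but the chaining logic is the same.

```python
from collections import defaultdict

class EventBus:
    """Minimal publish/subscribe bus: each sensor publishes events, and other
    devices subscribe to react, chaining 'smart' actions together."""
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self.handlers[topic]:
            handler(payload)

alerts = []
bus = EventBus()
# Motion at the barrier triggers the camera...
bus.subscribe("barrier.motion", lambda p: bus.publish("camera.capture", p))
# ...and a captured clip raises an alert in the operations center.
bus.subscribe("camera.capture", lambda p: alerts.append(f"ALERT: motion at {p['where']}"))

bus.publish("barrier.motion", {"where": "gate 3"})
print(alerts)  # ['ALERT: motion at gate 3']
```

Because sensors only know about topics, not about each other, new combinations (an atmospheric sensor triggering a thermal camera, say) can be added without changing any existing device.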
Technology continues to develop at a rapid and accelerating pace. While it can be easy to become distracted by the potential of every new trend or innovation, each must be considered in relation to the use cases that are going to deliver maximum positive impact and value to organizations and citizens. This remains the lens through which we view technology trends and their application, and 2019 promises to be another exciting year in bringing new technologies to market in increasingly useful ways.
Authored by Johan Paulsson, Chief Technology Officer, Axis Communications