
Key Enterprise Technology Trends to Watch in 2023

Updated: Feb 16, 2023



"Big things have small beginnings" may be a good way of characterizing the coming year in B2B IT. As B2B IT investors, we believe that Enterprise technologies foundational to the future will graduate from thought-leadership territory and enter the commercial marketplace at various levels of maturity. At Service Ventures, we canvassed the smartest and best-placed people we know to offer their views on key Enterprise Technology trends for 2023 and beyond. We connected with founders, tech execs, peer VCs, research scholars, and industry experts to speculate on the coming year within their fields of interest. We collected several key convictions for 2023, and we believe these diverse views on various technologies offer a composite look at what is likely to take shape in B2B Enterprise IT in 2023 and beyond.


Multi Cloud IT Infrastructure

Industry insiders believe that enterprises will use a mix of dedicated private clouds, public clouds, and legacy platforms to manage their infrastructure needs. While many larger and older organizations continue to rely on legacy systems, more and more are moving individual processes, or nearly their entire operations, to the cloud. In early 2019, Deutsche Bank decided to accelerate its transition to the public cloud by turning to Google Cloud. More recently, Capital One closed its last physical data center, making it the first U.S. bank to operate fully on the public cloud when it transitioned entirely to Amazon Web Services. In 2023, we may see more enterprises adopting hybrid cloud services, and they'll likely work with several providers at once for different business objectives.


IT Infrastructure & Operation (IO) Automation

Traditionally, automation in Enterprise IT has been most mature in the context of SW development (DevOps), where CI/CD tools automate many of the processes required to write, build, test, and deploy SW. To the extent that IT infrastructure and operations teams have leveraged automation, it has typically been in the context of infrastructure provisioning. Industry practitioners expect a broader use of automation across all aspects of IT operations - from cloud setup and management, to network configuration, to security and compliance enforcement and beyond. Automation in these contexts will help IO teams complete complex processes much faster, while also reducing the risk of inconsistent configurations. Part of the reason we may see broader use of automation is that enterprises are increasingly taking advantage of tools with premade automation libraries to configure the scripts that power automated workflows. Instead of having to set up automations by hand and from scratch, smart IT teams are using preconfigured tooling that dramatically reduces the time and effort to deploy automations across all facets of enterprise IT operations.
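
To make the idea concrete, here is a minimal sketch of the declarative, idempotent style most automation tooling follows: desired state is declared once, and each run only corrects drift. The resource names and the current_rules/apply_firewall_rule helpers are hypothetical placeholders, not any specific vendor's API.

```python
# Minimal sketch of declarative, idempotent IT automation (illustrative only).

DESIRED_FIREWALL_RULES = [
    {"name": "allow-https", "port": 443, "source": "0.0.0.0/0"},
    {"name": "allow-ssh-admin", "port": 22, "source": "10.0.0.0/8"},
]

def current_rules():
    """Stand-in for querying the live environment for rules already applied."""
    return [{"name": "allow-https", "port": 443, "source": "0.0.0.0/0"}]

def apply_firewall_rule(rule):
    """Stand-in for a provider API call that creates the missing rule."""
    print(f"applying rule {rule['name']} (port {rule['port']})")

def reconcile():
    existing = {r["name"] for r in current_rules()}
    for rule in DESIRED_FIREWALL_RULES:
        if rule["name"] not in existing:   # act only on drift, so reruns are safe
            apply_firewall_rule(rule)

if __name__ == "__main__":
    reconcile()
```

Preconfigured automation libraries essentially ship the reconcile logic and the resource definitions, so teams only declare the desired state.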


Edge AI

Edge AI is the use of AI techniques embedded in IoT endpoints, gateways, and other edge computing devices. Use cases range from autonomous vehicles to streaming analytics. Experts believe Edge AI will have a very high impact because of its potential to disrupt numerous use cases across many industries. AI embedded in IoT endpoints is becoming a leading revenue opportunity for service providers. The IoT endpoint runs AI models to interpret captured data and drive the endpoint's functions, such as automation and actuation, without manual intervention. The AI model can be trained and updated in a central cloud and then deployed to the IoT endpoint. New classes of industrial heavy equipment could function autonomously in environments such as mining and agriculture, using local resources on board the asset. Edge AI is a catalyst for the adoption of general IoT solutions because of its ability to reduce solution costs. In 2023, asset-heavy industries might see deployments of edge solutions with more AI components.
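
As a rough illustration of that train-in-the-cloud, infer-at-the-edge split, the sketch below runs a tiny pre-trained model locally on a sensor reading and actuates without a cloud round trip. The weights, feature layout, and actuate behavior are assumptions for illustration, not a real deployment.

```python
import numpy as np

# Hypothetical weights trained in a central cloud and shipped to the endpoint
# as a small artifact (for example, over an OTA update channel).
W = np.array([[0.8, -0.3],
              [0.1,  0.9]])          # assumed 2-feature, 2-class linear model
b = np.array([0.05, -0.05])

def infer(sensor_reading: np.ndarray) -> int:
    """Run the tiny model locally; no round trip to the cloud."""
    logits = sensor_reading @ W + b
    return int(np.argmax(logits))

def actuate(decision: int) -> None:
    # Placeholder for driving a relay, valve, or motor controller.
    print("open valve" if decision == 1 else "keep valve closed")

if __name__ == "__main__":
    reading = np.array([0.7, 0.2])   # e.g., normalized temperature and vibration
    actuate(infer(reading))
```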


Private 5G Networks

A private 5G network is based on 3GPP technology and spectrum to provide connectivity, optimized services, and security for a specific enterprise. It is installed and managed by a technology vendor, a systems integrator (SI), or the end user for the express use of a single, unique entity. Current deployments are mostly proofs of concept with a focus on a limited but growing set of use cases. These use cases typically require full, reliable network coverage of buildings, machines, sensors, and equipment - including indoor, outdoor, office, and industrial areas - with better performance than Wi-Fi. Network experts believe the business impact of private 5G is high because the use cases apply across industries and enterprises of all sizes as they implement their digital transformation initiatives. They expect more enterprise interest in this technology in 2023, although it could take private 5G three to six years to reach early-majority adoption.


AI Semiconductor Chips

AI chips are a type of semiconductor device optimized for processing AI workloads. Such chips typically have parallel processing units with local memory, both optimized for AI processing. They are designed for data-heavy environments and typically operate in conjunction with a general-purpose CPU, offloading specific parts of its workload. Their capabilities can be used in edge computing, endpoint devices, or data center equipment, as they are optimized to execute an AI workflow (for example, DNN training or inferencing) and for specific types of AI workloads or DNNs, such as convolutional neural networks (CNNs) or recurrent neural networks (RNNs). Much of the initial work developing DNN-based AI applications takes place on high-performance GPU-based systems located in large data centers, with a focus on training DNNs. While GPUs are a good general-purpose solution with an established SW ecosystem, several semiconductor vendors and startups are developing dedicated AI chips specifically optimized for running these workloads in resource-constrained edge computing deployments. We at Service Ventures believe AI chips could see faster adoption because of high market demand for specific AI/DNNs, such as CNNs for video analytics and RNNs for audio processing.


Open-Source Software

Most people don't realize that when they turn on a new Netflix show, that viewing experience runs on Linux - an open-source operating system widely used in public clouds. On the backend, Netflix uses Amazon Web Services (AWS), whose servers run the open-source Linux operating system. Open source has enabled a tremendous amount of innovation and its ecosystem isn't going away. Just ask the people who build the various technologies we use and love every day. A survey of 65,000 SW developers by Stack Overflow showed they prefer to work on open source. However, SW developers rarely want to download and run open-source SW themselves and manage its integrity across future releases. Companies that monetize open-source projects understand this, so they provide a cloud-based service that developers can access and use through APIs in the SW application they're building. The acceleration in revenues for companies like MongoDB, Elastic, Confluent and so on happened only after they set up a cloud-based service that SW developers were able to access via APIs. Over the last few years, big companies have been built on open-source platforms. Some have been acquired, like Red Hat, which IBM bought for $34B, or GitHub, which Microsoft bought for $7.5B. Proponents of open-source SW are almost religious about it and believe that this could be the future of enterprise SW.


Digital Twins

A digital twin is a virtual representation of an entity such as an asset, person, or process, developed to support business objectives. Key elements of the technology include the SW model, data, a unique one-to-one association with the real entity, and monitorability. These building blocks are created, used, and shared within enabling technologies such as analytics software, IoT platforms, or simulation tools. Currently, digital twins are being deployed in asset-intensive industries such as oil and gas and manufacturing. Airports, real estate management firms, and other organizations that need to monitor people for health and safety purposes and conduct COVID-19-related compliance reporting are also increasingly using digital twins. Medical institutions are going beyond patient health records to develop digital twins of patients. Standards organizations and consortia are emerging, such as the Digital Twin Consortium and the National Digital Twin Programme at the Centre for Digital Built Britain. The market is seeing new offerings from technology providers as well as increased adoption by advanced enterprises. Technology service providers are also beginning to effectively link their digital twin enabling technologies to well-defined business outcomes. Although the average enterprise's understanding of digital twin potential remains low, we believe that 2023 could be the year for these enterprises to trial some of the digital twin capabilities available in the market.
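
To ground the "SW model plus data plus one-to-one association" idea, here is a minimal sketch of a twin for a single physical pump. The asset ID, telemetry fields, and thresholds are illustrative assumptions rather than any product's data model.

```python
from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    """One-to-one software counterpart of a specific physical pump."""
    asset_id: str
    max_temp_c: float = 80.0          # assumed maintenance threshold
    history: list = field(default_factory=list)

    def ingest(self, telemetry: dict) -> None:
        """Mirror the latest IoT reading into the twin's state."""
        self.history.append(telemetry)

    def needs_maintenance(self) -> bool:
        latest = self.history[-1]
        return latest["temp_c"] > self.max_temp_c or latest["vibration_mm_s"] > 7.0

twin = PumpTwin(asset_id="pump-0117")
twin.ingest({"temp_c": 84.2, "vibration_mm_s": 3.1})
print(twin.needs_maintenance())       # True: the physical asset should be inspected
```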


Automated Cyber Security Remediation

Modern enterprises can take advantage of a plethora of security tools that are very good at detecting and generating alerts about risks. What most tools historically haven't done, however, is respond to security issues, given the varied nature and sheer number of such alerts. That task has conventionally fallen to IT engineers, who must review those numerous alerts, then formulate and execute a response plan. In many cases, this type of manual response isn't necessary, because security risks often boil down to relatively simple issues - a misconfigured firewall rule or an overly permissive access control setting - that could be corrected automatically by SW instead of humans. Realizing this, forward-thinking enterprises are making use of automated remediations to improve security outcomes and minimize the time it takes to mitigate risks. Security Ops people believe that complex issues will always require human intervention, but the bulk of mundane, tedious security response operations could be managed with automated tools.
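
A minimal sketch of what alert-driven auto-remediation can look like is below: routine findings are mapped to playbooks and fixed automatically, while anything unrecognized or critical is escalated. The alert format and remediation helpers are hypothetical, not a specific SOAR product's API.

```python
# Illustrative auto-remediation dispatcher (all names and formats are assumptions).

def close_open_ssh(alert):
    print(f"revoking 0.0.0.0/0 ingress on port 22 for {alert['resource']}")

def tighten_bucket_acl(alert):
    print(f"removing public-read ACL from {alert['resource']}")

PLAYBOOKS = {
    "ssh_open_to_world": close_open_ssh,
    "public_storage_bucket": tighten_bucket_acl,
}

def handle(alert):
    playbook = PLAYBOOKS.get(alert["type"])
    if playbook and alert.get("severity") != "critical":
        playbook(alert)                          # routine issues are fixed automatically
    else:
        print(f"escalating {alert['type']} to a human analyst")

if __name__ == "__main__":
    handle({"type": "ssh_open_to_world", "resource": "sg-0abc", "severity": "medium"})
    handle({"type": "unknown_lateral_movement", "resource": "host-17", "severity": "critical"})
```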


AR Cloud

AR Cloud enables the unification of the physical and digital worlds by delivering persistent, collaborative, and contextual digital content overlaid on people, objects, and locations, giving people information and services directly tied to every aspect of their physical surroundings. For example, an individual can receive fare, route, and schedule information about public transit based on their context (personal status, geolocation, calendar appointments, travel preferences, etc.) simply by "looking at" a bus or bus station with their phone, tablet, or head-mounted display. Such a technology requires numerous building blocks: edge networking, high-bandwidth and low-latency communications, standardized tools and content types for publishing into the AR Cloud, management and delivery of content, and interoperability to ensure seamless and ubiquitous experiences. Industry observers believe some of the AR Cloud infrastructure will be ushered in by the arrival of low-latency wireless networking (5G will serve as an enabling technology), while other pieces are still being developed (graph technologies). We are optimistic that AR Cloud will transform how people interact with the world around them. It will provide a digital abstraction layer for people, places, and things, will span business and consumer applications, and will impact every industry regardless of geography. The hype around the Metaverse has also driven further interest by introducing the potential for new business models and monetization opportunities for physical-digital interactions. In 2023, innovators could seriously embark on building the foundation of such a ubiquitous cloud.


Synthetic Data for AI

Synthetic data is a class of data that is artificially generated rather than obtained from direct observations of the real world. It is one solution to the problem of insufficient data for training AI models. Such data can be generated using different methods, such as statistically rigorous sampling from real data, semantic approaches, generative adversarial networks, or by creating simulation scenarios where models and processes interact to create completely new datasets of events. Synthetic data can also be generated for a wide range of data types. While row, record, image, video, text, and speech applications are common data types, R&D labs are expanding the concept of synthetic data to graphs. Synthetically generated graphs will resemble but not overlap the original. Such fictitious data enables the anonymization of personally identifiable information for sharing and analysis. Synthetic data will ultimately apply across different usage styles - data annotation, data anonymization, data enhancement, and data generation. Technologists in the data analytics and AI community believe there is a wide opportunity for innovators and that the technology will take three to six years to achieve early-majority adoption. To meet increasing demand for synthetic data for natural language automation training, especially for chatbots and speech applications in the short term, new and existing vendors could bring offerings to market in 2023.
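
The simplest of those methods, statistical sampling from real data, can be sketched in a few lines: fit distributions to a small real table, then draw new rows that resemble it without reproducing any real record. The toy columns below are assumptions, and fitting independent marginals is a deliberately naive stand-in for richer generators such as copulas or GANs.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny "real" table: age and monthly spend for a handful of customers.
real_age   = np.array([23, 35, 41, 52, 29, 47], dtype=float)
real_spend = np.array([120.0, 340.0, 410.0, 580.0, 200.0, 495.0])

def fit(col):
    """Fit a simple Gaussian marginal to one column."""
    return col.mean(), col.std(ddof=1)

def sample(stats, n):
    mean, std = stats
    return rng.normal(mean, std, size=n)

synthetic_age   = sample(fit(real_age), 1000)
synthetic_spend = sample(fit(real_spend), 1000)

# The synthetic rows resemble the originals statistically, but none of them
# corresponds to an actual customer record.
print(round(synthetic_age.mean(), 1), round(synthetic_spend.mean(), 1))
```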


Cloud WAN

Enterprises have long recognized the importance of optimizing network connectivity: they can't deliver reliable, high-performing services without stable, high-bandwidth, low-latency network connections. Historically, enterprises implemented those reliable connections by building their own IP/MPLS networks or relying on networking services from colocation (CoLo) providers. They interconnected their data centers, for example, or built direct connections between cloud environments and on-prem infrastructure. Networking gurus believe that going forward in 2023, enterprises may implement high-performance networks in new ways. Instead of relying on interconnection services, they'll increasingly turn to a new breed of wide area network (WAN) solutions from cloud providers, like AWS Cloud WAN and Azure Virtual WAN. Enterprises may stand up WAN services from cloud providers in days instead of months; such services are much faster and simpler to implement than traditional interconnections. In addition, enterprises may pay for those services using consumption-based pricing. That's currently not possible when enterprises build their own networks, which are often over-built to support periods of peak demand that occur only occasionally, meaning businesses pay for capacity they don't consistently use.


Multimodal Human-Machine Interface

Multimodal Human-Machine Interface is a high-level design model in which user and machine interactions can occur simultaneously via a combination of spoken or written natural language and touch (on a screen). Data can be processed from sources beyond text, including images, video, tables, maps, audio, gesture, motion, myoelectric signals, brain-computer interfaces, and eye movement. Fusing vision, audio, voice, and other inputs can support multiuser, metacontext conversations in various applications and dramatically change how humans perform complex tasks. It is the next evolution of the conversational UI and can appear across enterprise applications, mobile apps, virtual assistants, HW devices, and IoT. AR/VR experts believe that the long-term impact of multimodal UI will be very high, as it will transform all types of interactions between humans and machines and enable more natural search and assist capabilities. Additionally, the availability of frameworks from NVIDIA and Google will help accelerate and democratize the development of multimodal UI-enabled experiences and applications across a broad spectrum of developers.


Graph Technologies

Graph technologies encompass a wide variety of solutions that work with data represented as a set of nodes and edges instead of tables, rows, and columns. They allow users to find relationships between people, places, things, events, locations, and so forth across diverse data. Interest in graph analytics is driven by specific use cases that require analysis across an exponentially larger amount of heterogeneous data. Analyzing relationship data can require large volumes of heterogeneous data, storage, and analysis - all of which are not well suited to relational databases. Improving understanding and collaboration with business users, organizing and preparing data for downstream processes, uncovering hidden insights, governance, security, improving ML model creation, and providing explainable AI are just some of the use cases driven by different graph technologies. COVID-19 spiked interest in graph analytics by more than 90% in healthcare management, clinical research, and healthcare supply chains, where highly complex models were developed and used in ML, with the output stored in graph databases. Other areas for graph analytics include fraud detection, cybersecurity, supply chain optimization, customer 360, social network analysis, and workforce analytics. Graph technology is forming the foundation of many modern data analytics capabilities. It is becoming an integral part of data management technologies, including data integration, metadata management, master data management, data catalogs, data analysis, and data stewardship. With increased demand for graph technologies globally, we believe significant innovation will be ushered in over the coming years, starting in 2023.
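
The node-and-edge model is easiest to see in code. The short sketch below uses the open-source networkx library (an assumption; any graph database would express the same idea) to link people, devices, and merchants, then asks relationship questions that would take several joins in a relational schema.

```python
import networkx as nx   # assumes the open-source networkx package is installed

# People, merchants, and devices as nodes; interactions as labeled edges.
G = nx.Graph()
G.add_edge("alice", "device-42", kind="logged_in_from")
G.add_edge("bob",   "device-42", kind="logged_in_from")
G.add_edge("bob",   "merchant-7", kind="paid")
G.add_edge("carol", "merchant-7", kind="paid")

# Relationship queries become one-liners:
print(nx.shortest_path(G, "alice", "carol"))   # how two customers are connected
print(list(G.neighbors("device-42")))          # everyone sharing the same device
```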


Air Gap Storage & Backup

Backups are an important part of every organization's data recovery plan and should always be protected. Air gapping is a security measure that physically or logically isolates at least one copy of a data backup. Air-gapped backups are one of the best ways to keep sensitive information secure while still providing assurance that a good copy of the last backup will be accessible when you need it most. Air-gapped storage volumes cannot be accessed by applications, databases, users, or workloads running in the production environment. The general objective of air gapping is to keep malicious entities away from the last copy of an organization's digital assets. Air-gapped backups serve two primary purposes. First, they prevent at least one copy of a backup from being manipulated or destroyed. Second, they help ensure quick restores, because the integrity of an isolated, air-gapped backup can be trusted. The key idea is that even if all the data on a primary system gets compromised, there will be a foolproof copy that can be used to restore data. Since air-gapped backups do not have network access, even if someone hacks into a network, they would not be able to access and change the backup unless they are physically present at the backup's location and have the right access credentials. When a backup is physically or logically isolated from production servers and networked storage systems, it can't be changed by unauthorized entities or corrupted by malware. The threat could be a virus, malware, ransomware, an external attacker, or an unauthorized insider - or simply a human mistake or an unexpected power outage that corrupts backed-up data accidentally. We believe that with the increase in demand for security and compliance, the advantages of air-gapped backups cannot be overstated.


Peer-to-Peer Edge Computing

Much of the early focus on edge computing has been on connecting intelligent things to the cloud and extending cloud capabilities closer to users and things at the edge. As more things connect at the edge itself, there will be growth in processing, interaction, and decision making across things and systems at the edge. Peer-to-peer edge computing enables distributed computing across an edge environment for resilience, workload orchestration, horizontal scaling, and interaction and cooperation between edge computing nodes, using local or mesh networking as an enabling technology. These systems can leverage each other for resilience, horizontal scaling, and orchestration of work. The technology is still far in the future, as it requires standards for networking, interoperability, and orchestration, and solutions for security. But we think peer-to-peer edge computing will be essential for certain use cases (e.g., robot swarms, collaborative immersive environments), creating standards that will spread to adjacent industries and use cases. Specific edge computing situations could benefit greatly from peer-to-peer edge computing sooner. Over time, as digital capabilities are added to more devices with increasing processing power, peer-to-peer edge computing may become a more common way that edge decision making is handled.


Generative AI

Generative AI refers to AI techniques that learn a representation of artifacts from data and use it to generate brand-new, completely original artifacts that preserve similarity to the original data. Generative AI can produce totally novel media content (text, image, video, and audio), synthetic data, and models of physical objects. It can be used in drug discovery or for the inverse design of materials with specific properties. The fast progress of ML transformer models capable of generating novel artifacts is top of mind in the AI community. Notably, GPT-3 by OpenAI and AlphaFold 2 by Google's DeepMind, both of which use transformers, recently dominated AI news. The compute resources required to train large generative models are substantial and unaffordable for most vendors, but efficient Generative AI SW is becoming more accessible, many generative techniques are new, and more are coming to market. Recently, a growing number of life sciences companies have begun examining generative AI to accelerate drug development. Synthetic data produced with Generative AI techniques supports the accuracy and speed of AI delivery. Experts believe that exploration of Generative AI methods will only grow in additional industries, including material science, media, entertainment, automotive, aerospace, defense, and energy. People believe that Generative AI could disrupt software coding: combined with existing development automation techniques, it could automate a majority of the work done by programmers.
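
For readers who want to see how low the barrier to experimentation has become, the sketch below generates novel text with the open-source Hugging Face transformers library and the small, publicly available gpt2 checkpoint. Both the library and the model choice are assumptions for illustration; production systems would use far larger models.

```python
from transformers import pipeline   # assumes the transformers package is installed

# Load a small, publicly available generative model (downloads weights on first run).
generator = pipeline("text-generation", model="gpt2")

prompt = "In 2023, enterprise IT teams will"
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)

# Brand-new text that resembles, but does not copy, the model's training data.
print(result[0]["generated_text"])
```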


Homomorphic Encryption

Homomorphic encryption is a cryptographic method that enables third parties to process encrypted data while having no knowledge of the underlying data itself or the results. The technology protects data in use, but not data at rest or in transit, which must be addressed separately. There are plenty of opportunities to exploit its potential in discrete use cases concerning data privacy. Privacy and data security mandates continue to emerge globally - the EU's General Data Protection Regulation (GDPR), PCI standards, the California Consumer Privacy Act, and Australia's Privacy Act, for example. All these mandates are expected to oblige providers and customers to evaluate their use and exchange of data with third-party entities. Homomorphic encryption-based processing remains 1,000 to 1,000,000 times slower than the equivalent plain-text computation, although some commercial use cases are reaching only 10 to 100 times slower. The computational overhead remains too heavy for most general computing scenarios, and much work remains to optimize SW infrastructures to broaden the scope of practical applications. But security experts believe that homomorphic encryption could be a core technology for many future SaaS offerings to ensure the protection and privacy of data handled by third-party data processing and analytics providers. We think that in the near term, performance issues, lack of standardization, and complexity could slow progress to the early-majority stage. The technology will benefit from continued advances in HW performance in the years to come, and 2023 may be the beginning of that trend.
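
The core property - computing on ciphertexts so that the result decrypts to the right answer - can be shown with a toy version of the Paillier scheme, which is additively homomorphic. This is a minimal sketch with deliberately tiny hard-coded primes; real deployments use keys of 2,048 bits or more and hardened libraries.

```python
import math, random

# Toy Paillier cryptosystem (additively homomorphic). Illustration only:
# the primes are far too small to be secure. Requires Python 3.9+ for
# math.lcm and pow(x, -1, n).
p, q = 1789, 1801
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1
mu = pow(lam, -1, n)                       # modular inverse of lambda mod n

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

a, b = 1234, 4321
ca, cb = encrypt(a), encrypt(b)

# Multiplying the ciphertexts adds the plaintexts; the party doing this
# arithmetic never sees a, b, or their sum in the clear.
assert decrypt((ca * cb) % n2) == a + b
print("sum computed on encrypted data:", decrypt((ca * cb) % n2))
```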


AI Enabled SW Development Tools

AI-enabled SW development tooling refers to the use of AI technologies such as ML, NLP, and similar techniques to help SW engineering teams create and deliver applications more efficiently, faster, at lower development cost, and at higher quality. These tools commonly integrate with an SW engineer's existing toolchain to provide real-time intelligent feedback and suggestions throughout the development process. Propelled by the rapid growth of software code, the data generated by digital applications, and cloud computing, these AI tools could gain capabilities that transform the SW development life cycle. They will increase developer velocity by making highly relevant code and library recommendations in a fraction of the time it would otherwise take, and will augment SW quality and test engineers by allowing tests to self-heal and by automatically creating test sets. In particular, the use of AI to build other AI models will increase the ability of enterprise employees to create models that add value to applications and data in the business. Developers forecast that these technologies will be highly valued in the B2B Enterprise IT sector, and 2023 could be a path-breaking year for them.


6G

6G is the generic name for the next generation of cellular wireless, expected to follow 5G-Advanced. It will enhance current 5G capabilities and is expected to provide higher theoretical peak data rates (e.g., 100 Gbps to 1 Tbps), lower latency (e.g., 0.1 ms), higher connection density, and greater energy efficiency (e.g., 10 times more efficient). Some leading countries have already started initiatives: in August 2020, the South Korean government announced plans to launch a pilot project for 6G in 2026. In October 2020, the Alliance for Telecommunications Industry Solutions (ATIS) in the U.S. launched the Next G Alliance to advance North American leadership in 6G. In April 2021, the U.S. and Japan agreed to jointly invest $4.5 billion in the development of next-generation communications known as 6G. Design and research for 6G have already begun across many industry associations and academic and commercial organizations. Telecommunication executives believe that in 2023, many technologies and concepts from 6G research could slowly find their way into various wireless systems (cellular and otherwise) for early testing.


Encryption Metadata Analytics

Encrypting traffic on the sending device is relatively inexpensive and ensures that no one who intercepts the traffic can read its contents. 80% to 90% of enterprise network traffic crossing the edge is encrypted. While encryption is a great method for protecting an organization's data and communications, it is also a great tool for bad actors to communicate, attack, and infiltrate an organization. Unless an organization can decrypt the traffic, which may not be possible (or legal) in many cases, its inspection technology cannot detect the malicious activity. Even when decryption is possible, it is computationally very expensive, leading to overloaded equipment or significantly degraded performance. That's why a substantial amount of enterprise traffic today goes uninspected - it is encrypted, and analyzing it would require too great a resource commitment. This is not acceptable. That's where encryption metadata analytics can help: without decrypting the traffic, the patterns of communication can be used to fingerprint and identify malicious activity. Frameworks such as the open-source JA3 are often the starting point. They measure characteristics of the traffic that are not obscured by encryption, from simple things like the source of the communication to more complicated analysis that recognizes patterns in the size and frequency of the packets. These patterns become behavioral fingerprints of the encrypted communication, which in turn are compared to the behavioral fingerprints of known malicious communications. Cyber experts believe this technology would greatly simplify an organization's obligation to comply with privacy regulations covering employees and customers. It is becoming more popular with NDR and SIEM vendors today, and in 2023 we could see more serious interest from security vendors in this technology.
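
A JA3-style fingerprint is conceptually simple: the TLS handshake parameters a client advertises in the clear are concatenated in a fixed order and hashed, and the resulting value is matched against fingerprints of known malicious tooling. The sketch below follows that recipe; the parameter values and the blocklist entry are made-up examples, not real captures.

```python
import hashlib

def ja3_style_fingerprint(version, ciphers, extensions, curves, point_formats):
    """Hash of the ClientHello fields, concatenated in the JA3 field order."""
    fields = [
        str(version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ]
    return hashlib.md5(",".join(fields).encode()).hexdigest()

# Illustrative handshake values and a hypothetical blocklist entry.
fp = ja3_style_fingerprint(771, [4865, 4866, 49195], [0, 11, 10], [29, 23], [0])
KNOWN_MALICIOUS = {"e7d705a3286e19ea42f587b344ee6865"}

print(fp, "-> suspicious" if fp in KNOWN_MALICIOUS else "-> not on blocklist")
```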


Privacy Enhancing Computation

A lot of the data collection that powers the internet, and particularly advertising, might be done in much less invasive ways. Cryptographers have been working on technologies like multiparty computation, zero-knowledge proofs, and homomorphic encryption for years, but they've finally gotten good enough to be practical for real-world problems. We saw a little bit of this in 2022, but between the W3C Private Advertising Technology Community Group and the work on Privacy Preserving Measurement in the IETF, this may be a real area to watch in 2023. It may be the start of having real tools to work with people's data while preserving their privacy.
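
Of those techniques, multiparty computation is the easiest to illustrate. Below is a toy additive secret-sharing sketch (an assumption-laden example, not any of the W3C or IETF proposals) in which three parties learn only the total of their private values, never the individual inputs.

```python
import random

MOD = 2**61 - 1   # arithmetic is done modulo a large prime

def share(value, n_parties=3):
    """Split a private value into random shares that sum to it modulo MOD."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

private_values = [17, 42, 99]                    # each held by a different party
all_shares = [share(v) for v in private_values]

# Each party sums the shares it received (one column); only the combined
# aggregate is ever revealed, never an individual value.
partial_sums = [sum(column) % MOD for column in zip(*all_shares)]
print(sum(partial_sums) % MOD)                   # 158
```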


Immersion Cooling of Compute Infrastructure

Immersion cooling is a type of data center server cooling in which server boards are immersed in a nonconductive heat-transfer liquid, typically within an immersion container in a dense, closed system. Immersion cooling systems can deliver better cooling efficiency than the widely used passive and forced-air cooling systems, and they are more effective than traditional liquid cooling solutions that route liquid coolant to specific high-power components within the server. Immersion cooling enables high-performance computing systems to operate in very dense configurations or in environments where air-based cooling may not be viable or effective, such as edge infrastructure deployments outside the data center. It permits servers to operate in constrained environments, such as 5G network control nodes and IoT edge servers. HW designers believe that immersion cooling will be integral to edge compute infrastructure, as edge compute systems may reach densities where air cooling is no longer cost-effective and deployment locations may be unable to support efficient air cooling.


Cloud-Locked Semiconductors

A semiconductor, such as a system-on-chip (SoC) or embedded processor, can be locked to use only a specific cloud service by design - routing all communication through a security circuit. This mechanism can be used to restrict communications to a specific cloud provider, such as Microsoft’s Azure platform, providing a high level of security and protection from hacking. Alternative systems are generally software-based, authenticating communications and commands based on an installed cryptographic key. Such systems remain vulnerable to a low-level attack that manages to rewrite the device firmware, replacing the keys or bypassing the authentication process. Locking the semiconductors to a specific cloud provides a measure of protection even if the endpoint operating system has been compromised in this way. A similar mechanism is used by cellular networks, which use a removable SIM with embedded credentials locked to a specific network operator — this security remains in place even if the software of the smartphone is completely compromised. Having the hardware locked to a specific cloud passes responsibility for the security of communication to the cloud provider, simplifying product development for the IoT developer. Limited availability of semiconductors has discouraged adoption by product developers, but that is changing, and users are expecting to see adoption accelerate over the next year.


Self-Supervised Learning

Self-supervised learning is an approach to machine learning in which labeled data is created from the data itself, without humans providing labels. This is achieved by masking elements in the available data (e.g., a part of an image, a sensor reading in a time series, a frame in a video, or a word in a sentence) and then training a model to "predict" the masked or missing element. In doing so, the model learns how information relates to other information - for example, how situations typically precede or follow one another, and which words often go together. AI influencers believe that self-supervised learning will have a very high impact because it aims to overcome one of the biggest drawbacks of supervised learning: the need for large amounts of quality labeled data. This is not just a practical problem in organizations with limited relevant data or where manual labeling is prohibitively expensive; it is also a fundamental problem in current AI, in which learning even simple tasks requires a huge amount of data, time, and energy. In self-supervised learning, labels can be generated from relatively limited data. Self-supervised learning enables models to represent concepts and their spatial, temporal, or other relations in a particular domain. These models can be fine-tuned using transfer learning for one or more specific tasks with practical relevance. In addition to supporting specialized model development, it may also shorten training time and improve the robustness and accuracy of models. Self-supervised learning is an important enabler for the next major phase in AI, overcoming the limitations of, and going beyond, the current dominance of supervised learning. It recently emerged from academia and is currently practiced by only a limited number of innovative AI companies. Self-supervised learning still depends on the creativity of highly experienced ML experts to design a self-supervised learning task, based on masking available data, that allows a model to build up knowledge and representations meaningful to the business problem at hand. Tool support is still virtually absent, making implementation a knowledge-intensive and low-level coding exercise. Experts in our circle believe the potential impact and benefits of self-supervised learning are very large, as it will extend the applicability of machine learning to environments that lack large labeled datasets or that rely on unlabeled data, such as computer vision, natural language processing, IoT analytics, continuous intelligence, and robotics.
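
The masking idea fits in a few lines of code. In this minimal sketch (toy data and a linear predictor standing in for a deep model), each reading in an unlabeled sensor series becomes its own label: the model is trained to predict a masked value from its neighbors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled "sensor" series; no human-provided labels anywhere.
series = np.sin(np.linspace(0, 20, 400)) + 0.05 * rng.standard_normal(400)

# Self-supervision: mask each middle reading and predict it from its neighbors.
X, y = [], []
for t in range(1, len(series) - 1):
    X.append([series[t - 1], series[t + 1]])   # context around the masked point
    y.append(series[t])                        # the "label" comes from the data itself
X, y = np.array(X), np.array(y)

# A tiny linear predictor fitted by least squares (a stand-in for a deep model).
A = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ w
print("mean absolute error:", round(float(np.abs(pred - y).mean()), 4))
```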


Quantum Technology

Quantum technology is a type of nonclassical computing that operates on the quantum state of subatomic particles. The particles represent information as elements denoted quantum bits (qubits). A qubit can represent all possible values of its two dimensions (superposition) until read. Qubits can be linked with other qubits, a property known as entanglement. Quantum algorithms manipulate linked qubits in their entangled state, enabling future system designs that can potentially address a set of use cases classical systems cannot handle. Quantum computers are not general-purpose computers; rather, they are accelerators for a limited number of algorithms, with orders-of-magnitude speedups over conventional computers. Quantum systems face challenges in scale, noise, and connectivity that require as-yet-unknown breakthroughs before they offer business value beyond what classical systems deliver. They require a complex hybrid ecosystem of physical technologies, often involving extremely low temperatures, vacuum environments, and lasers, combined with high-performance general-purpose computer systems to control and manage the quantum elements. Like the semiconductor industry, quantum can help accelerate AI and machine learning. We began seeing quantum emerge as a key Enterprise IT trend almost two years ago, and IT leaders across top organizations were already exploring it. Recent developments - including Google's quantum computer, which performed a calculation that would take a classical supercomputer hundreds of years, and both IonQ and Honeywell announcing quantum machines with record computing power - suggest a quantum boom is on the horizon. Researchers believe quantum technology is on pace to become a multibillion-dollar industry. Although the disruptive impact of quantum technology could be years away, product leaders at technology and service providers are planning to engage with quantum computing development. In 2023, we may see more critical progress.
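
Superposition and entanglement can be simulated on a classical machine for a couple of qubits. The sketch below builds a Bell state by applying a Hadamard gate and then a CNOT to a two-qubit statevector; measuring would yield 00 or 11 with equal probability, and never 01 or 10.

```python
import numpy as np

# Single-qubit Hadamard, identity, and two-qubit CNOT (control = first qubit).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                        # start in |00>
state = np.kron(H, I) @ state         # superposition on the first qubit
state = CNOT @ state                  # entangle the two qubits

probs = np.abs(state) ** 2
print(dict(zip(["00", "01", "10", "11"], probs.round(3))))
# -> {'00': 0.5, '01': 0.0, '10': 0.0, '11': 0.5}
```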


Hyperautomation

Hyperautomation refers to a combination of tools that can integrate functional and process silos to automate and augment enterprise processes. There are three main components of hyperautomation: data ingestion, integration, and process visibility. On the data ingestion side, structured and unstructured documents need to be converted from paper and their relevant data reliably extracted. For moving data, the key tools are intelligent business process management suites (iBPMS), iPaaS, low-code application platforms, and RPA. Common use cases for hyperautomation are found in most front-, middle-, and back-office processes - predominantly customer onboarding, order taking, payments, and customer data updates, which are typically highly manual. Other use cases include regulatory compliance, employee onboarding, product tracking across supply chain systems for retail and manufacturing, and tracking for transportation and logistics. The toolbox includes a wide array of technologies for structured and unstructured data ingestion, process mining, integration support like iPaaS, RPA, BPM, workflow engines, decision management suites, low-code platforms, and others. Currently, no one vendor has all the elements needed to perform hyperautomation. SW pioneers believe that going forward in 2023, every large and medium organization will have a hyperautomation strategy for process mining, data ingestion, and data integration.


Photonics & Light-Based Computing

Photonic and light-based computing uses photons for data transmission instead of the electrons used in traditional digital logic. These computing systems use lasers to generate the photons and combine electronics, silicon photonics, and algorithms to build a computing system. While photonic computing is at a very early stage of development and still unproven at commercial scale, these systems promise a significant increase in processing bandwidth and are very energy-efficient compared with today's high-performance data center systems based on silicon technologies. Electronic engineers believe much work remains to create more efficient, miniaturized photonic circuits. As a reference, current optical switches are 1,000 to 10,000 times the size of silicon transistors. This is not a problem for simple circuits, but it is challenging for complex interconnected systems. Optical channels and switches do not scale with Moore's Law, which limits the development of photonic computing systems. Industry believes that initial adoption will be in use cases with high throughput requirements, such as deep learning workloads - image and video processing, natural language understanding, and robotics.


Secure Access Service Edge (SASE)

Secure access service edge delivers five key converged network and security capabilities: software-defined WAN (SD-WAN), secure web gateway (SWG), cloud access security broker (CASB), network firewall (FW), and zero trust network access (ZTNA). It is primarily delivered as a service and enables dynamic zero-trust access based on the identity of the device or entity, combined with real-time context and security and compliance policies. The most transformative aspect of this trend is the shift from on-premises appliances to cloud-based services; serving the above five capabilities from the cloud edge is fundamental to SASE. The changes to security product architecture and buyer preference are sizable, which could give SASE a particularly large influence on the security market in 2023, according to networking gurus.


Workplace Productivity Tech

When Slack surveyed 3,000 knowledge workers in the U.S. about their attitudes toward remote work, only 12% said they want to return to the office full-time, and 72% said they want a hybrid office-remote working life. While that means a lot less orientation around the office than before, it's clear that employees still want places where they can get together. They want a home base and the opportunity to meet in person for team events, off-sites, all-hands, and so on. And for strategic cases where teams need to work in person, there will be a place to gather. The net-net is that offices will be more about collaboration and teamwork than working in a silo. A new technology trend that's taking off centers on workplace productivity and analytics: the move toward remote work has caused companies to take a closer look at how their employees spend their time and to make sure they're productive with the work they're doing.



/Service Ventures Team



