Guide to Algorithms in AI

If you use the Internet in any capacity, you will inevitably run into algorithms. From Google’s search engine to Facebook’s timeline algorithms to the systems that help financial institutions process transactions, algorithms are the foundation of artificial intelligence.

Despite being core to our digital lives, algorithms aren’t often understood by anyone besides the people who create them. Infamously, despite supporting nearly 400,000 full-time creators with its platform, YouTube’s algorithm, which recommends videos and spotlights channels related to users’ interests, is known for being an opaque black box on which creators’ fortunes rise and fall.

This article will shine a light on this fundamental aspect of the tech industry.

Also see: Top AI Software 

What is an Algorithm?

In basic terms, an algorithm is a set of clearly defined steps that must be followed in order to reach a planned result. Algorithms are commonly used to solve mathematical problems, and any algorithm can be broken up into three broad components:

  • Input: The information you already know at the beginning of the problem.
  • Algorithm: The sequence that needs to be followed step-by-step to achieve the desired result.
  • Output: The expected results if all steps in the sequence are followed to the letter.

An example of an algorithm-like system outside of the tech world would be cooking recipes. You have your input (the ingredients), you have your algorithm (the steps of the recipe which need to be followed more or less exactly), and you have your output (a hopefully-edible dish).
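To make those three components concrete in code, here is a minimal, hypothetical Python sketch (not from the original article): the list of prices is the input, the fixed sequence of steps is the algorithm, and the returned total is the output.

```python
def total_with_tax(prices, tax_rate):
    """Input: a list of item prices and a tax rate.
    Algorithm: the fixed sequence of steps below.
    Output: the final amount owed."""
    subtotal = 0.0
    for price in prices:          # step 1: add up every item
        subtotal += price
    tax = subtotal * tax_rate     # step 2: compute the tax
    return subtotal + tax         # step 3: return the result

print(total_with_tax([2.50, 4.00, 1.25], 0.08))  # 8.37
```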

We’re not kidding when we say algorithms are part of the atomic structure of our digital lives, either. Any computer program you utilize is running multiple algorithms to perform its functions. From your web browser to your word processor to the Microsoft Solitaire that has been included with Windows since version 3.0, every single one of them runs off of algorithms.

Also see: The Future of Artificial Intelligence

How Do Algorithms Work in AI?

Fundamentally, artificial intelligence (AI) is a computer program. That means that, like Firefox or Microsoft Word or Zoom or Slack, any AI or machine learning (ML) solution you come across is built from the ground up with algorithms.

What algorithms do in AI, as well as machine learning, is variable. Broadly speaking, they define the rules, conditions, and methodology an AI will use when processing and analyzing data. This can be as simple as defining the steps an AI needs to take to process a single invoice or as complex as having an AI filter out pictures with dogs from a dataset containing hundreds of thousands of pictures.

Algorithms in machine learning help predict outputs even when given previously unseen inputs. AI algorithms function similarly by solving different categories of problems. The types of problems that AI algorithms solve can be divided into three broad categories:

  • Classification: A type of machine learning which is used to predict what category, or class, an item belongs to. One example would be programming an AI to differentiate between spam messages and messages you actually need.
  • Regression: A type of machine learning which is used to predict a continuous numerical value based on input data. One example would be using historical data to forecast stock market prices and projections.
  • Clustering: A type of machine learning which is used to sort objects into groups based on similarities in their features. One example would be using an algorithm to sort through a set of financial transactions and pick out instances of potential fraud.

Also see: How AI is Altering Software Development with AI-Augmentation 

Types of AI Algorithms

Classification Algorithms

Below are some examples of classification algorithms used in AI and machine learning.

Binary Logistic Regression

Binary logistic regression can predict a binary outcome, such as Yes/No or Pass/Fail. Other forms of logistic regression, such as multinomial regression, can predict three or more possible outcomes. Logistic regression can often be found in use cases like disease prediction, fraud detection, and churn prediction, where its datasets can be leveraged to assess risks.
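As a rough illustration only (the hours-studied feature and labels are made up, and scikit-learn is assumed to be available), a binary Pass/Fail logistic regression might look like this in Python:

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical data: hours studied vs. pass (1) / fail (0)
X = [[1], [2], [3], [4], [5], [6]]
y = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X, y)

print(model.predict([[2.5]]))        # predicted class for 2.5 hours of study
print(model.predict_proba([[2.5]]))  # probabilities for fail vs. pass
```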

Naive Bayes

Naive Bayes is a probabilistic algorithm built on independence assumptions, meaning it operates on the assumption that no two features in a dataset are related to each other or affect each other in any way. This is why it’s called “naive.” It’s commonly used in text analysis and classification models, where it can sort words and phrases into specified categories.
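A minimal sketch of Naive Bayes text classification, assuming scikit-learn and using made-up example messages:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = ["win a free prize now", "meeting moved to 3pm",
         "claim your free reward", "lunch tomorrow?"]
labels = ["spam", "not spam", "spam", "not spam"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)  # word counts, treated as independent features

clf = MultinomialNB()
clf.fit(X, labels)

print(clf.predict(vectorizer.transform(["claim a free prize"])))  # likely ['spam']
```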

K-nearest Neighbors (k-NN)

While also sometimes used to solve regression problems, k-NN is most often used to solve classification problems. When classifying, it places labeled data points on a plane and predicts the class label of a new data point based on which class label is most often represented among the points nearest to it. k-NN is also known as a “lazy learning” algorithm, which means it doesn’t undergo a full training step, instead only saving a training dataset.
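A short, illustrative k-NN sketch with scikit-learn and invented 2-D points; note that fit() mostly just stores the training data, reflecting the “lazy learning” behavior described above:

```python
from sklearn.neighbors import KNeighborsClassifier

# Made-up 2-D points and their class labels
X = [[1, 1], [1, 2], [2, 1], [8, 8], [8, 9], [9, 8]]
y = ["A", "A", "A", "B", "B", "B"]

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, y)  # "lazy learning": the training set is stored, not heavily processed

# The new point takes the majority label of its 3 nearest neighbors
print(knn.predict([[2, 2]]))  # ['A']
```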

Decision Tree

A supervised learning algorithm, decision trees can also be used for either classification problems or regression problems. It’s called a “tree” because it possesses a hierarchical structure. Starting with a root node, it branches out into smaller internal or decision nodes where evaluations are conducted to produce subsets, which are represented by terminal or leaf nodes.

An example would be starting with a root node for martial arts, which then branches into internal nodes for martial arts with a striking focus and martial arts with a grappling focus. These internal nodes can then split into terminal nodes for specific martial arts like boxing, jiu-jitsu, and Muay Thai. These algorithms are great for data mining and knowledge discovery tasks because they’re easy to interpret and require very little data preparation to be deployed.
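A toy decision tree along the lines of the martial arts example, assuming scikit-learn; the 0/1 features standing in for “striking focus” and “grappling focus” are invented for illustration:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical features: [striking_focus, grappling_focus]
X = [[1, 0], [1, 0], [0, 1], [0, 1]]
y = ["striking art", "striking art", "grappling art", "grappling art"]

tree = DecisionTreeClassifier(max_depth=2)
tree.fit(X, y)

# Show the root node and the branches (internal/leaf nodes) it splits into
print(export_text(tree, feature_names=["striking", "grappling"]))
```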

Random Forest

Random forests leverage the output of multiple decision trees to produce a prediction. Like decision trees, random forests can be used to solve both classification and regression problems. Each tree is made up of a data sample drawn from a training dataset that uses sampling with replacement. This adds randomization to the decision trees, even if they draw from the exact same dataset.

In classification problems, a majority vote is determined from the output of these randomized decision trees. For example, say there are 10 decision trees dedicated to determining what color a dress is. Three trees say it is blue, two say it is black, four say it is pink, and one says it is red. The dress would be categorized as pink.

Random forests are the algorithm of choice for finance-focused machine learning models, as they can reduce the time spent on pre-processing and data management tasks. Fraud detection, option pricing, and customer credit risk evaluation are all examples of their use in finance. The random forest algorithm is trademarked by Leo Breiman and Adele Cutler.
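A brief random forest sketch, with scikit-learn assumed and toy transaction data invented for illustration; the forest’s prediction is the majority vote of its randomized trees:

```python
from sklearn.ensemble import RandomForestClassifier

# Made-up transaction features: [amount, hour_of_day]; label 1 = fraud
X = [[20, 10], [15, 14], [900, 3], [18, 12], [1200, 2], [25, 9]]
y = [0, 0, 1, 0, 1, 0]

# 10 randomized decision trees; their majority vote is the prediction
forest = RandomForestClassifier(n_estimators=10, random_state=0)
forest.fit(X, y)

print(forest.predict([[1000, 4]]))  # likely [1], i.e. flagged as potential fraud
```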

Also see: Best Machine Learning Platforms 

Regression Algorithms

Below are some examples of regression algorithms used in AI and machine learning.

Linear Regression

An algorithm with use in both statistics and the social sciences, linear regression is used to define the linear relationship between a dependent variable and an independent variable. The goal of this sort of algorithm is to determine a possible trend line with the given data points. Businesses often use linear regression when determining how revenue is affected by advertising spending.
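For illustration, a linear regression relating hypothetical ad spend to revenue might be fit like this with scikit-learn (all figures invented):

```python
from sklearn.linear_model import LinearRegression

# Hypothetical figures: ad spend (in $1,000s) vs. revenue (in $1,000s)
X = [[10], [20], [30], [40], [50]]
y = [25, 45, 62, 85, 105]

reg = LinearRegression()
reg.fit(X, y)

print(reg.coef_, reg.intercept_)  # slope and intercept of the fitted trend line
print(reg.predict([[60]]))        # projected revenue at $60k of ad spend
```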

Poisson Regression

Poisson regression is a type of regression where a predicted variable is always assumed to follow a Poisson distribution. A Poisson distribution is a probability function that can help determine the probability of a given number of events happening within a specific, fixed time period.

For example, you could use Poisson regression to determine how likely a classroom of high schoolers is to solve a Rubik’s Cube within 24 hours. Or, you could predict how likely a restaurant is to have more customers on specific days based on the average number of diners they serve in a week.
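A minimal Poisson regression sketch along the lines of the restaurant example, assuming a recent version of scikit-learn (which provides PoissonRegressor) and made-up diner counts:

```python
from sklearn.linear_model import PoissonRegressor

# Hypothetical data: day of week (0=Mon .. 6=Sun) vs. diners served (a count)
X = [[0], [1], [2], [3], [4], [5], [6]]
y = [40, 42, 45, 50, 70, 95, 90]

model = PoissonRegressor()   # assumes the target follows a Poisson distribution
model.fit(X, y)

print(model.predict([[5]]))  # expected number of diners on a Saturday
```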

Ordinary Least Squares (OLS) Regression

One of the most popular regression algorithms, OLS regression estimates the linear relationship between variables by choosing the coefficients that minimize the sum of squared differences between the observed and predicted values. It’s often used in the social sciences and in business forecasting, where researchers and analysts need a simple, interpretable way to relate one variable to another, such as relating survey responses to demographic factors.
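To show what “minimizing the sum of squared differences” looks like in practice, here is a small sketch using NumPy’s least-squares solver on invented data:

```python
import numpy as np

# Toy data: a column of 1s (intercept) plus a single predictor
X = np.array([[1, 1], [1, 2], [1, 3], [1, 4]], dtype=float)
y = np.array([2.1, 3.9, 6.2, 7.8])

# OLS: pick the coefficients that minimize the sum of squared residuals
coef, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # [intercept, slope]
```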

Lasso (Least Absolute Selection and Shrinkage Operator) Regression

Lasso regression takes an OLS regression and adds a penalty term to the equation. The penalty shrinks the coefficients of less important variables toward zero, which helps prevent overfitting and can produce a simpler, more interpretable model than plain OLS. Lasso regression is also known as L1 regularization.
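A short lasso sketch with scikit-learn on invented data; the second feature is deliberately uninformative, so its coefficient should be shrunk toward zero by the L1 penalty:

```python
from sklearn.linear_model import Lasso

# Toy data: y depends only on the first feature; the second is noise
X = [[1, 5], [2, 3], [3, 8], [4, 1], [5, 6]]
y = [2, 4, 6, 8, 10]

lasso = Lasso(alpha=0.5)  # alpha controls the strength of the L1 penalty
lasso.fit(X, y)

print(lasso.coef_)  # the uninformative feature's coefficient is pushed toward zero
```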

Neural Network Regression

Neural networks are one of the most popular methods of AI and ML training out there. As the name implies, they’re inspired by the human brain and are great at handling datasets that are too large for more common machine learning approaches to consistently handle.

Neural networks are a versatile tool and can perform regression analysis as long as they are given the appropriate amount of prior data to predict future events. For example, you could feed the neural network customers’ web activity data and metadata to determine how likely a customer is to leave your website without buying anything.
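As a rough illustration of neural network regression, here is a small multi-layer perceptron in scikit-learn; the “likelihood of leaving without buying” targets and web-activity features are invented:

```python
from sklearn.neural_network import MLPRegressor

# Hypothetical web-activity features: [pages_viewed, minutes_on_site]
X = [[1, 2], [3, 5], [5, 9], [7, 12], [9, 15], [2, 3]]
y = [0.9, 0.7, 0.4, 0.2, 0.1, 0.8]   # made-up likelihood of leaving without buying

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
net.fit(X, y)

print(net.predict([[6, 10]]))  # estimated likelihood for a new visitor
```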

Check Out: Top Predictive Analytics Solutions

Clustering Algorithms

Below are some examples of clustering algorithms used in AI and machine learning.

K-Means Clustering

An unsupervised learning algorithm, k-means clustering takes datasets with certain features and values related to these features and groups data points into a number of clusters. The “K” stands for the number of clusters you’re trying to classify data points into. K-means clustering possesses a number of viable use cases, including document classification, insurance fraud detection, and call detail record analysis.
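A minimal k-means sketch with scikit-learn, using invented call-record-style features and K = 2:

```python
from sklearn.cluster import KMeans

# Made-up records: [average call duration in minutes, calls per day]
X = [[2, 30], [3, 28], [2, 25], [20, 2], [22, 3], [25, 1]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)  # K = 2 clusters
kmeans.fit(X)

print(kmeans.labels_)           # which cluster each record was assigned to
print(kmeans.cluster_centers_)  # the two cluster centroids
```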

Mean Shift Clustering

A simple, flexible clustering technique, mean shift clustering assigns data points into clusters by shifting points toward the area with the highest density of data points (called a mode). How a cluster is defined in this setting can depend on multiple factors, such as distance, density, and distribution. It’s also known as a “mode-seeking algorithm.” Mean shift clustering has use cases in fields like image processing, computer vision, customer segmentation, and fraud detection.
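A brief mean shift sketch with scikit-learn on invented 2-D points; the bandwidth setting, which controls how far each point “looks” for a denser region, is an arbitrary choice here:

```python
from sklearn.cluster import MeanShift

# Made-up 2-D points forming two dense regions (modes)
X = [[1.0, 1.0], [1.2, 0.8], [0.9, 1.1], [6.0, 6.0], [6.1, 5.9], [5.8, 6.2]]

ms = MeanShift(bandwidth=1.5)  # points are shifted toward nearby high-density areas
ms.fit(X)

print(ms.labels_)           # cluster assignment for each point
print(ms.cluster_centers_)  # the discovered modes
```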

Density-Based Spatial Clustering of Applications with Noise (DBSCAN)

DBSCAN separates high-density clusters from one another at points of low data point density. Netflix’s movie recommendation algorithm uses a similar clustering method to determine what to recommend to you next.

For example, if you watched the recent Netflix movie “Do Revenge,” the algorithm would look at other users who also watched “Do Revenge” and suggest movies and shows based on what those users watched next. DBSCAN is excellent at handling outliers in datasets. Viable use cases for DBSCAN include customer segmentation, market research, and data analysis.
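A compact DBSCAN sketch with scikit-learn and invented points; note how the isolated point is labeled -1 (noise) rather than forced into a cluster, which is why DBSCAN handles outliers well:

```python
from sklearn.cluster import DBSCAN

# Two dense groups plus one isolated outlier
X = [[1, 1], [1.1, 0.9], [0.9, 1.2], [5, 5], [5.1, 4.9], [4.9, 5.2], [12, 12]]

db = DBSCAN(eps=0.5, min_samples=2)  # eps = neighborhood radius
db.fit(X)

print(db.labels_)  # e.g. [0 0 0 1 1 1 -1]; -1 marks the outlier as noise
```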

Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH)

BIRCH is a clustering technique often used for handling large datasets. It can scan an entire database in a single pass, focusing on spaces with high data point density and producing a precise summary of the data.

A common way to implement BIRCH is to do so alongside other methods of clustering that can’t handle large datasets. After BIRCH produces its summary, the other clustering method runs through the summary and clusters it. As such, BIRCH’s best use cases are large datasets that normal clustering methods cannot efficiently process.
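A small BIRCH sketch with scikit-learn; the dataset here is tiny and invented, whereas in practice BIRCH is aimed at data too large for other clustering methods to process directly:

```python
from sklearn.cluster import Birch

# Made-up 2-D points; imagine millions of rows in a real BIRCH use case
X = [[1, 1], [1.2, 1.1], [0.8, 0.9], [9, 9], [9.1, 8.9], [8.8, 9.2]]

# BIRCH builds a compact summary (a CF tree) in a single pass over the data
birch = Birch(n_clusters=2, threshold=0.5)
birch.fit(X)

print(birch.predict(X))  # cluster label for each point
```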

Gaussian Mixture Model (GMM)

Much like Poisson regression utilizes the concept of a Poisson distribution, a GMM models a dataset as a mixture of multiple Gaussian distributions. The Gaussian distribution is also known as the “normal distribution,” and as such, it can be intuitive to assume that a dataset’s clusters will fall along the lines of Gaussian distributions.

GMMs can be useful for handling large datasets, as they retain many of the benefits of single-Gaussian models. GMMs have found use in speech recognition systems, anomaly detection, and stock price prediction.
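A short Gaussian mixture sketch with scikit-learn on invented 1-D data drawn from two roughly normal groups; unlike k-means, it can also report soft (probabilistic) cluster memberships:

```python
from sklearn.mixture import GaussianMixture

# Made-up 1-D data from two roughly Gaussian groups
X = [[1.0], [1.2], [0.9], [1.1], [5.0], [5.2], [4.8], [5.1]]

gmm = GaussianMixture(n_components=2, random_state=0)
gmm.fit(X)

print(gmm.predict(X))              # hard cluster assignments
print(gmm.predict_proba([[3.0]]))  # soft membership for an in-between point
```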

Want to See What Exciting Things Companies Are Doing With AI Algorithms? Take a Look at Top Natural Language Processing Companies 2022

Top Natural Language Processing Companies 2022

As more and more companies adopt artificial intelligence (AI) in a variety of sectors, these AI are inevitably put in positions where they have to interact with human beings. From customer support chatbots to virtual assistants like Amazon’s Alexa, these use cases necessitate teaching an AI how to listen, learn, and understand what humans are saying to it and how to respond.

One method for teaching AI how to communicate with humans is natural language processing (NLP). Sitting at the intersection of AI, computer science, and linguistics, natural language processing’s goal is to create or train a computer capable of understanding not just the literal words humans say but also the contextual implications and nuances found in their language.

As the AI industry has grown in prominence, so too has the NLP industry. A report from Allied Market Research valued the global NLP market at $11.1 billion in 2020, and it is expected to grow to $341.5 billion by 2030. Within that valuation lies a myriad of both promising startups and experienced tech veterans pushing the science further and further.

History of Natural Language Processing

Natural language processing has been part of AI research since the field’s infancy. Alan Turing’s landmark paper Computing Machinery and Intelligence, in which the famous Turing Test was introduced, includes a task requiring the automated interpretation of natural language.

From the 1950s to the 1990s, NLP research largely focused on symbolic NLP, which attempts to teach computers language contexts through associative logic. Essentially, the AI is given a human-generated knowledge base designed to include the conceptual components of a language and how those components relate to one another.

Using this knowledge base, the AI can then understand the meanings of words in context through IF-THEN logic. An example of this would be similes. If you said, “He’s as fast as a cheetah,” the AI would understand that the person you are talking about would not be a literal cheetah.
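As a toy illustration of that IF-THEN, knowledge-base approach (not code from any real system of the era), a hand-written rule for similes might look like this:

```python
# A tiny, hand-built "knowledge base" in the spirit of symbolic NLP
knowledge_base = {
    "cheetah": {"is_animal": True, "known_for": "speed"},
    "ox":      {"is_animal": True, "known_for": "strength"},
}

def interpret_simile(sentence):
    # IF the sentence mentions "as a <thing>" and <thing> is a known animal,
    # THEN treat the comparison as figurative rather than literal.
    for thing, facts in knowledge_base.items():
        if f"as a {thing}" in sentence and facts["is_animal"]:
            return f"Figurative: the subject is notable for {facts['known_for']}."
    return "No matching rule; interpret literally."

print(interpret_simile("He's as fast as a cheetah"))
```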

Thanks to increases in computing power starting in the 1990s, machine learning algorithms were introduced into natural language processing. This is when machine translation programs started gaining prominence. Examples you might use would be Google Translate or DeepL.

As the internet grew in popularity through the 2000s, NLP machines gained access to even more raw data to sift through and understand. As such, researchers began focusing on developing unsupervised and semi-supervised learning algorithms. These algorithms were less accurate than supervised learning algorithms, but the sheer quantity of data they could process helped offset those inaccuracies.

Today, many natural language processing AIs use representational learning and deep neural network-style machine learning techniques to develop more accurate language modeling and parsing capabilities.

Read More At: What Is Artificial Intelligence?

Benefits of Natural Language Processing

Using natural language processing in business has a number of benefits. For instance, NLP programs used in customer support roles can be active 24/7 and can be cheaper to implement and maintain than a human employee. This makes NLP a potential cost-saving measure.

NLP can also be used to nurture leads and develop targeted advertising, ensuring that an organization’s products are being put in front of the eyes of the people most likely to buy them. This can help boost the effectiveness of human marketing teams and drive revenue up without necessarily needing to spend money on more widespread advertising campaigns.

Natural language processing can also be used to boost search engine optimization (SEO) and help make sure a business stays as high in the rankings as possible. NLP can analyze search queries, suggest related keywords, and help save time on SEO research, giving businesses more time to optimize their content quality.

Top Natural Language Processing Companies

Google

One of the biggest names in AI and tech, Google naturally has a long history of utilizing NLP in its products and services. Just this year, one of its researchers asserted that the company’s Language Model for Dialogue Applications (LaMDA) was sentient, thanks in large part to its responses to the researcher via text chat. Google even began public testing of LaMDA in late August 2022.

In terms of product offerings, it has a Natural Language API which allows users to derive new insights from unstructured text. Its AutoML provides custom machine learning models to better analyze, categorize, and assess documents. The Dialogflow development suite can be deployed in a variety of different settings to create conversational user interfaces such as chatbots on websites, mobile apps, and other platforms.

Finally, Google Cloud’s Document AI solution lets customers automate data capture at scale, allowing them to extract more data from documents without boosting costs.

Read More At: The Future of Artificial Intelligence

Wordsmith

Automated Insights’ Wordsmith platform is touted as the world’s first publicly-available natural language generation (NLG) engine. By inputting information into the engine, users can create clear, understandable content powered by AI.

Being one of the first of its kind, the platform has a number of interesting clients. Notably, the Associated Press has partnered with Automated Insights to power over 50,000 AI-generated news articles, according to Automated Insights’ website.

Wordsmith’s interface is one of the easiest to use on the market with a high degree of customizability. However, initial setup can take longer than expected. Those looking for quick-deployment options might need to look elsewhere. The content output will also likely need some touching up by in-house staff before publication.

Overall, Wordsmith is a solid choice for companies looking for a way to convert large volumes of data into readable, formatted content.

Indata Labs

Based out of Cyprus, Indata Labs leverages its employees’ experience in big data analytics, AI, and NLP to help client companies get the most out of their data. Organizations in industries like healthcare, e-commerce, fintech, and security have made use of Indata Labs’ expertise to generate new insights from their data.

The firm offers a wide range of services and solutions, from data engineering to image recognition to predictive analytics. In the NLP space, the firm offers customer experience consulting, consumer sentiment analysis, and text analysis to ensure clients generate as much value from their datasets as possible.

Indata Labs also maintains its own in-house AI R&D (research and development) Center and works with some of the best computer vision and NLP companies in the world to develop new solutions and push the fields of business intelligence, AI, and natural language processing forward.

IBM

Another tech titan, IBM offers its Watson suite of AI products, which are some of the best on the market. Naturally, Watson’s wide array of services features a number of NLP solutions. Watson Discovery is an intelligent search and text analysis platform which enterprises can use to help find information potentially hidden in their vast stores of data.

Watson Assistant is a customer support platform which collects data from client conversations. Through this, Watson Assistant chatbots can better learn how to make the customer support process less stressful and time-consuming for customers.

Finally, Watson Natural Language Understanding uses deep learning to identify linguistic concepts and keywords, perform sentiment analysis, and extract meaning from unstructured data.

Read More At: The Benefits of Artificial Intelligence

Synthesia

Synthesia is a web-based AI video generation platform. Through its library of video templates, AI voices, and avatars, users can craft videos at-scale to meet whatever needs they might have. Synthesia’s tech has been used by over 10,000 companies, including Nike, Google, the BBC, and Reuters, to create videos in over 60 languages, according to its website.

Other features on the platform include a screen recorder, custom AI avatar crafting, closed captioning, and access to a library of royalty-free background music. If an organization has access to its own library of media assets, they can easily upload and then use these assets in Synthesia.

Intel

A major tech name like Intel is bound to have a whole host of NLP-related services. There is, of course, Intel’s wide array of AI products, from development tools to deployment solutions.

For organizations interested in leveling up their NLP knowledge, Intel offers an extensive natural language processing developer course where students can learn the ins and outs of actually utilizing NLP in AI training.

There is also the NLP Architect, a Python library developed by Intel AI Labs. A Python library is, in essence, a collection of premade code which can be repeatedly implemented in different programs and scenarios. The NLP Architect specifically is meant to help make developing custom NLP-trained AI easier.

MindMeld

MindMeld offers a conversational AI platform through which companies can develop conversational interfaces designed to best suit their apps, algorithms, and platforms.

Through MindMeld, companies have developed and deployed interfaces for food ordering, home assistance, banking assistance, and video discovery. It provides training at each step of the NLP hierarchy, ensuring each level of logic in the process is accounted for.

It’s thanks to this innovative platform that Entrepreneur Magazine placed MindMeld in its 100 Brilliant Companies list in 2015. Companies using MindMeld include Cisco, Appspace, Davra, and Altus.

Microsoft

Microsoft’s reach extends across the entire tech landscape. It’s no surprise that AI, and by extension natural language processing, is one area of interest to the Washington-based tech giant. In fact, Microsoft’s Research Lab in Redmond, Washington, has a group dedicated specifically to NLP research.

Through Microsoft’s Azure cloud computing service, customers can train and deploy customized natural language processing frameworks. The company even offers documentation on how to do so. To utilize NLP in Azure, Microsoft recommends Apache Spark, an open-source unified analytics engine built for large-scale data processing.

Notable features of these customized NLP frameworks for Azure include sentiment analysis, text classification, text summarization, and embedding. Additionally, Microsoft’s Azure AI can support a multilingual training model, allowing organizations to train NLP AI to perform in multiple different languages without retraining.

Read Next: What Is Deep Learning?

Inside Job? Hands Up if You Like Payment Implants

In the classic cyberpunk landscapes of William Gibson’s Neuromancer or the popular roleplaying game Shadowrun, the distraught populace squirming under the boot of mega-corporations had two minor luxuries to take solace in.

First was the gritty, neon-soaked cityscapes that made dystopia look cool to live in, if you don’t mind dark urban sprawl.

Second, body modifications were all the rage. From blades hidden in your forearms to eyes that could see in three different visual spectrums, there were a number of implants and modifications that could turn cyberpunk denizens into whatever kind of human they wanted to be.

So of course, in our real-world sorta-cyberpunk dystopia, we took this beautifully transhumanist concept and chose to make a credit card implant with it.

Also see: 9 Ways AI Can Help Improve Cloud Management

Contactless Transactions Get Physical

Walletmor is a European firm selling surgical implants that let you make contactless financial transactions without the need of a credit/debit card. It’s meant to function identically to a card you tap to give payment. It uses the same near-field communication (NFC) technology as a modern credit card, only activating when put in contact with an authorized payment terminal.

Though Walletmor was incorporated in 2020, at least one individual was injected with the implant in 2019, according to a BBC article on the technology. According to the firm, it became the first company to offer this type of product for sale in 2021. In August 2022, the firm announced its 1,000th sale of this implant to a “male Walletmor Ambassador” in Finland.

After installation into their wrist, users link the implant up to an app called Purewrist. Purewrist’s prepaid Mastercard can then be loaded with money from your bank account or debit cards and used to make various financial transactions. Purewrist also offers the same contactless, cardless payment solution as Walletmor’s implant but with a wristband instead of an implant.

In terms of security, Walletmor claims its product is safer than traditional credit/debit cards for two reasons. First, because the implant sits inside your wrist, there are no card details for thieves to photograph or write down and use later to steal your money. Second, short of having it (or your forearm) removed, there is no way for you to lose the implant the way you would a credit/debit card.

However, the implant itself isn’t the only potential point of failure in the product. Because payments route through an app like Purewrist, your personal information is only as secure as Purewrist’s cybersecurity standards. That said, this is no different from having your card information stored on Venmo or PayPal.

That said, the implant doesn’t full-on replace your credit/debit card either. As mentioned, the app is linked to a prepaid Mastercard, meaning your cards are still necessary to load funds. Overall, the process doesn’t seem too dissimilar from buying a Visa gift card for yourself and continually loading it with money – but with invasive surgery involved.

Walletmor assures users that its technology “does not have GPS and no systems that allow you to spy on or track your location.”

Also see: Video Conferencing has Bloomed in a Time of Crisis 

Is There Potential Here?

In short, absolutely. Microchip implants have been around since 1998, but widespread adoption is still relatively new. The risk, barrier to entry, and invasiveness of the technology have so far dampened consumer enthusiasm.

However, there are a number of existing applications of the technology. People have already used implants to store personal details, address books, and medical histories. There are also some whose chips can light up, bringing the human race as close as we’ll likely come to true bioluminescence, one of the coolest evolutionary innovations in all of nature.

For the B2B fintech space, there isn’t much to salivate over at the moment. The most likely use cases in this sector would be for access control. Unfortunately, the process of installation is too invasive to reasonably use at scale, and any other viable use cases at this time fall more in the realm of individual use than organizational use. This is perhaps why Walletmor has sold only 1,000 of these implants since 2021 as of this writing.

In order for these implants to be viable at an enterprise level, there would need to be ways to make the implant process as foolproof, cheap, and accessible as possible, and even in countries with excellent healthcare like South Korea or Japan, this is still not the case.

That said, realistically, there simply might not be room for B2B tech at this particular table. The surgery required to get the implant and the existence of the implant as part of a person’s body tend to push the technology more toward a personal, consumer-level product.

Unfortunately, we’re nowhere close to a Cyberpunk 2077 future of easily-available body modifications and implants. But hey, maybe in the near future, we can pay for a Starbucks coffee with our wrist. Or, some people can. For my part, I’ll likely stick to shoving my little piece of plastic into the machine until it beeps.

Next: Visa’s Michael Jabbara on Cybersecurity and Digital Payments

The Benefits of Artificial Intelligence

Across the world of enterprise and industry, few trends have seen as much rising investment as artificial intelligence (AI). From robotic process automation (RPA) to self-driving cars to AI-powered data analytics platforms, AI can be found almost everywhere you turn.

Indeed, AI boasts use cases in practically every industry, whether it be healthcare, financial services, or business administration.

AI as an industry is growing rapidly. Gartner expects worldwide AI sales to reach $62 billion in 2022. And a report from Grand View Research valued the global AI market at $93.5 billion in 2021 with a projected compound annual growth rate of 38.1% from 2022 to 2030.

With such a broad number of use cases and widespread expansion, there are numerous reasons that a company would use AI for their specific organization. The benefits listed below will offer a solid general primer about the ways AI adoption can bring overall improvements to an organization.

Also see: Top AI Software 

What is Artificial Intelligence? 

In brief, artificial intelligence is, in the words of AI pioneer John McCarthy, “the science and engineering of making intelligent machines.”

In a modern context, AI is typically a computer or computer program that can perform the sort of thinking that humans can. Depending on the AI, this can mean facial recognition, language processing, data analysis, or a number of other tasks and processes.

Like our own minds, artificial intelligence requires data and training in order to learn how to perform these tasks. This training can involve a variety of techniques, including deep learning and natural language processing.

AI’s effectiveness at performing these tasks varies based on:

  • What task it’s trying to perform.
  • The processing power it has to work with.
  • The type of training (data input and algorithm coding) it has received.

For example, AI programs have historically been very good at generating calculations from inputted data. On the other hand, despite the growing use of facial recognition software in certain parts of the world, AI still struggles to identify individuals of diverse backgrounds.

Ultimately, AI, as it is now, is a tool like any other, and tools work best in specific circumstances. It’s important to keep in mind what tasks you’re looking to automate and what tasks any specific AI specializes in when determining how an AI can benefit your organization.

Read More At: What is Artificial Intelligence?

Benefits of Artificial Intelligence

24/7 Operation

The most obvious benefit of using AI in any organization is that AI never needs time off. Aside from potential maintenance periods, AI needs no lunch breaks or vacations, and it is available for use whenever necessary.

If used in a customer service capacity, for example, AI can be perfect for businesses that operate internationally or across multiple time zones. Customers won’t have to wait until operating hours to potentially get the help they need.

For data analytics, this means a user can set up an AI model to run calculations overnight or after-hours without needing to physically be there to monitor its progress. This is a boon for companies that rely heavily on data analysis in their overall business strategy.

24/7 operation also benefits the healthcare industry. Doctors and nurses can utilize AI to run diagnostics, monitor metrics like patient glucose levels, and provide real-time data to both patient and doctor alike at any time of day. AI is also playing a role in the ongoing battle with the COVID-19 pandemic, providing around-the-clock predictions, processing, and contact tracing among other benefits.

Cost-Saving

As with 24/7 operation, AI’s ability to operate without human staffing means that, when deployed correctly, it can save businesses money. As of right now, an AI doesn’t require a salary, and maintenance and training costs can be easily absorbed by a company under the right circumstances.

Some costs related to AI are going down as well. As an example, in the 2022 AI Index Report from Stanford University Human-Centered Artificial Intelligence, the cost of training an image classification system has dropped from $1,000 in 2017 to $4.60 in 2021.

AI-powered endpoint detection and response (EDR) software can also save organizations money by spotlighting potential cybersecurity risk factors early, allowing companies to get ahead and handle the issue before it becomes too difficult to manage.

Read More At: Why Autonomous, AI-based Software Tests Save Costs

Decision Making

Another potential benefit of AI is its ability to enhance organizational decision-making skills. The aforementioned EDR software is a good example of this, by helping companies make decisions that support the safety of their data.

One of AI’s strongest skills is that it can take in and analyze colossal datasets more quickly and more efficiently than the average human. By utilizing AI, businesses can make use of far more information than they would be able to otherwise. With so many industries adopting data-driven approaches, having access to AI can prove invaluable for a business looking to stay ahead of the pack.

With the right AI and the right training, companies can also find potential new revenue streams and avenues for growth and expansion they might have missed otherwise. Whether this is an investment firm finding a potential unicorn to invest in or a retailer launching a new product line to capitalize off of emergent trends, the insights provided by AI can help ensure a business is maintaining the steady growth they need to succeed.

Increase in Productivity

With AI and specifically with RPA, organizations can streamline and automate certain processes. This is especially true for repetitive, easy-to-understand tasks such as transaction execution, new-hire processing, and claims processing.

By automating these tasks, businesses are better able to leverage their human employees for the creativity, skill, and experience they were hired for in the first place. Less time performing repetitive tasks means more time doing the work that companies need to succeed, grow, and thrive.

In addition to automating repetitive tasks, AI can also be leveraged to enhance duties which still require a human touch. For example, it can be used to help shorten development cycles, reducing the time needed to take a product from conceptualization to commercialization. This enables firms to realize a faster return on investment (ROI) than they might see without using AI.

Read More At: Best Machine Learning Platforms 2022

Reduce Human Error

Human error is inevitable in any business, but under certain circumstances, AI can go a long way in reducing the likelihood of human error affecting an organization’s operation.

This can happen in a number of ways. For example, an AI can be used to check someone’s math to make sure their calculations on certain datasets are correct. For repetitive tasks like transaction processing, properly-trained AI can make sure the task is consistently completed with a lower margin of error than a human could manage.

For companies that operate off of employee schedules, AI can reduce human error by making sure enough employees are scheduled to meet the needs of each shift. Additionally, an AI deployment can track employee hours to make sure no one is working too much or taking too much time off for the needs of a business.

Future Forecast: the Benefits of Artificial Intelligence

As with any technology, there are a number of trends and advancements in the works that could potentially push AI even further than it has already gone.

On the theoretical side, researchers like Yann LeCun have put forward ideas on how to train AI to learn and think more like humans. Google subsidiary DeepMind has even developed PLATO, an AI which can learn physics concepts in roughly the same way an infant can.

Any advancements in the world of computing will also benefit the world of AI. For example, cloud computing has already greatly advanced AI in the 2010s, playing a large part in organizations’ abilities to adopt and utilize AI in their day-to-day operations.

Quantum computing could provide a similar leap in AI power and potential. Through quantum computing, AI could take in larger, more complex datasets than has been possible through other methods of computing.

Overall, however, the biggest trend with AI is its growing prominence in our day-to-day lives. From social media algorithms to customer support chatbots to virtual assistants like Alexa, we are seeing more and more organizations investing in AI and pushing it forward. As more time and money is put into the field, AI advancements could rapidly speed up.

With the future, very little is certain, but in the present, AI and its potential benefits are highly significant. 

Read Next: The Future of Artificial Intelligence

History of Artificial Intelligence

Of the myriad technological advances of the 20th and 21st centuries, one of the most influential is undoubtedly artificial intelligence (AI). From search engine algorithms reinventing how we look for information to Amazon’s Alexa in the consumer sector, AI has become a major technology driving the entire tech industry forward into the future.

Whether you’re a burgeoning start-up or an industry titan like Microsoft, there’s probably at least one part of your company working with AI or machine learning. According to a study from Grand View Research, the global AI industry was valued at $93.5 billion in 2021.

AI as a force in the tech industry exploded in prominence in the 2000s and 2010s, but AI has been around in some form or fashion since at least 1950 and arguably stretches back even further than that.

The broad strokes of AI’s history, such as the Turing Test and chess computers, are ingrained in the popular consciousness, but a rich, dense history lives beneath the surface of common knowledge. This article will distill that history and show you AI’s path from mythical idea to world-altering reality.

Also see: Top AI Software 

From Folklore to Fact

While AI is often considered a cutting-edge concept, humans have been imagining artificial intelligences for millennia, and those imaginings have had a tangible impact on the advancements made in the field today.

Prominent mythological examples include Talos, the bronze automaton of Greek myth who protected the island of Crete, and the alchemical homunculi of the Renaissance period. Characters like Frankenstein’s Monster, HAL 9000 of 2001: A Space Odyssey, and Skynet from the Terminator franchise are just some of the ways we’ve depicted artificial intelligence in modern fiction.

One of the fictional concepts with the most influence on the history of AI is Isaac Asimov’s Three Laws of Robotics. These laws are frequently referenced when real-world researchers and organizations create their own laws of robotics.

In fact, when the U.K.’s Engineering and Physical Sciences Research Council (EPSRC) and Arts and Humanities Research Council (AHRC) published its 5 principles for designers, builders and users of robots, it explicitly cited Asimov as a reference point, though stating that Asimov’s Laws “simply don’t work in practice.”

Microsoft CEO Satya Nadella also made mention of Asimov’s Laws when presenting his own laws for AI, calling them “a good, though ultimately inadequate, start.”

Also see: The Future of Artificial Intelligence

Computers, Games, and Alan Turing

As Asimov was writing his Three Laws in the 1940s, researcher William Grey Walter was developing a rudimentary, analogue version of artificial intelligence. Called tortoises or turtles, these tiny robots could detect and react to light and contact with their plastic shells, and they operated without the use of computers.

Later, in the 1960s, Johns Hopkins University built its Beast, another computer-less automaton which could navigate the halls of the university via sonar and charge itself at special wall outlets when its battery ran low.

However, artificial intelligence as we know it today would find its progress inextricably linked to that of computer science. Alan Turing’s 1950 paper Computing Machinery and Intelligence, which introduced the famous Turing Test, is still influential today. Many early AI programs were developed to play games, such as Christopher Strachey’s checkers-playing program written for the Ferranti Mark I computer.

The term “artificial intelligence” itself wasn’t codified until 1956’s Dartmouth Workshop, organized by Marvin Minsky, John McCarthy, Claude Shannon, and Nathan Rochester, where McCarthy coined the name for the burgeoning field.

The Workshop was also where Allen Newell and Herbert A. Simon debuted their Logic Theorist computer program, which was developed with the help of computer programmer Cliff Shaw. Designed to prove mathematical theorems the same way a human mathematician would, Logic Theorist would go on to prove 38 of the first 52 theorems found in the Principia Mathematica. Despite this achievement, the other researchers at the conference “didn’t pay much attention to it,” according to Simon.

Games and mathematics were focal points of early AI because they were easy to apply the “reasoning as search” principle to. Reasoning as search, also called means-ends analysis (MEA), is a problem-solving method that follows three basic steps:

  • Determine the ongoing state of whatever problem you’re observing (you’re feeling hungry).
  • Identify the end goal (you no longer feel hungry).
  • Decide the actions you need to take to solve the problem (you make a sandwich and eat it).

This early forerunner of AI followed a simple rationale: if the actions did not solve the problem, find a new set of actions to take and repeat until the problem is solved.
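A toy sketch of that reasoning-as-search loop (illustrative only, not a historical program): compare the current state to the goal, apply whichever action reduces the difference, and repeat.

```python
def means_ends_search(state, goal, actions):
    """Repeatedly apply any action that reduces the difference to the goal."""
    plan = []
    while state != goal:
        progressed = False
        for name, apply_action in actions:
            new_state = apply_action(state)
            if abs(goal - new_state) < abs(goal - state):  # closes the gap?
                state, progressed = new_state, True
                plan.append(name)
                break
        if not progressed:   # no action helps: give up
            return None
    return plan

actions = [("add 3", lambda s: s + 3), ("add 1", lambda s: s + 1)]
print(means_ends_search(0, 7, actions))  # ['add 3', 'add 3', 'add 1']
```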

Neural Nets and Natural Languages

With Cold-War-era governments willing to throw money at anything that might give them an advantage over the other side, AI research experienced a burst of funding from organizations like DARPA throughout the ’50s and ’60s.

This research spawned a number of advances in machine learning. For example, Simon and Newell’s General Problem Solver, while using MEA, would generate heuristics, mental shortcuts which could block off possible problem-solving paths the AI might explore that weren’t likely to arrive at the desired outcome.

Initially proposed in the 1940s, the first artificial neural network was invented in 1958, thanks to funding from the United States Office of Naval Research.

A major focus of researchers in this period was trying to get AI to understand human language. Daniel Bobrow helped pioneer natural language processing with his STUDENT program, which was designed to solve word problems.

In 1966, Joseph Weizenbaum introduced the first chatbot, ELIZA, an act which Internet users the world over are grateful for. Roger Schank’s conceptual dependency theory, which attempted to convert sentences into basic concepts represented as a set of simple keywords, was one of the most influential early developments in AI research.

Also see: Data Analytics Trends 

The First AI Winter

In the 1970s, the pervasive optimism in AI research from the ’50s and ’60s began to fade. Funding dried up as sky-high promises were dragged back to earth by a myriad of real-world issues facing AI research. Chief among them was a limitation in computational power.

As Bruce G. Buchanan explained in an article for AI Magazine: “Early programs were necessarily limited in scope by the size and speed of memory and processors and by the relative clumsiness of the early operating systems and languages.” This period, as funding disappeared and optimism waned, became known as the AI Winter.

The period was marked by setbacks and interdisciplinary disagreements amongst AI researchers. Marvin Minsky and Seymour Papert’s 1969 book Perceptrons discouraged the field of neural networks so thoroughly that very little research was done in the field until the 1980s.

Then, there was the divide between the so-called “neats” and the “scruffys.” The neats favored the use of logic and symbolic reasoning to train and educate their AI. They wanted AI to solve logical problems like mathematical theorems.

John McCarthy introduced the idea of using logic in AI with his 1959 Advice Taker proposal. In addition, the Prolog programming language, developed in 1972 by Alain Colmerauer and Philippe Roussel, was designed specifically as a logic programming language and still finds use in AI today.

Meanwhile, the scruffys were attempting to get AI to solve problems that required AI to think like a person. In a 1975 paper, Marvin Minsky outlined a common approach used by scruffy researchers, called “frames.”

Frames are a way that both humans and AI can make sense of the world. When you encounter a new person or event, you can draw on memories of similar people and events to give you a rough idea of how to proceed, such as when you order food at a new restaurant. You might not know the menu or the people serving you, but you have a general idea of how to place an order based on past experiences in other restaurants.

From Academia to Industry

The 1980s marked a return to enthusiasm for AI. R1, an expert system implemented by the Digital Equipment Corporation in 1982, was saving the company a reported $40 million a year by 1986. The success of R1 proved AI’s viability as a commercial tool and sparked interest from other major companies like DuPont.

On top of that, Japan’s Fifth Generation project, an attempt to create intelligent computers running on Prolog the same way normal computers run on code, sparked further American corporate interest. Not wanting to be outdone, American companies poured funds into AI research.

Taken altogether, this increase in interest and shift to industrial research resulted in the AI industry ballooning to $2 billion in value by 1988. Adjusting for inflation, that’s nearly $5 billion in 2022 dollars.

Also see: Real Time Data Management Trends

The Second AI Winter

In the 1990s, however, interest began receding in much the same way it had in the ’70s. In 1987, Jack Schwartz, the then-new director of DARPA’s Information Science and Technology Office, effectively eradicated AI funding from the organization, though already-earmarked funds didn’t dry up until 1993.

The Fifth Generation Project had failed to meet many of its goals after 10 years of development. Meanwhile, as businesses found it cheaper and easier to purchase mass-produced, general-purpose chips and program AI applications into the software, the market for specialized AI hardware, such as LISP machines, collapsed and caused the overall market to shrink.

Additionally, the expert systems that had proven AI’s viability at the beginning of the decade began showing a fatal flaw. As a system stayed in-use, it continually added more rules to operate and needed a larger and larger knowledge base to handle. Eventually, the amount of human staff needed to maintain and update the system’s knowledge base would grow until it became financially untenable to maintain. The combination of these factors and others resulted in the Second AI Winter.

Also see: Top Digital Transformation Companies

Into the New Millennium and the Modern World of AI

The late 1990s and early 2000s showed signs of the coming AI springtime. Some of AI’s oldest goals were finally realized, such as Deep Blue’s 1997 victory over then-world chess champion Garry Kasparov in a landmark moment for AI.

More sophisticated mathematical tools and collaboration with fields like electrical engineering resulted in AI’s transformation into a more logic-oriented scientific discipline, allowing the aforementioned neats to claim victory over their scruffy counterparts. Marvin Minsky, for his part, declared in 2003 that the field of AI was and had been “brain dead” for the past 30 years.

Meanwhile, AI found use in a variety of new areas of industry: Google’s search engine algorithm, data mining, and speech recognition just to name a few. New supercomputers and programs would find themselves competing with and even winning against top-tier human opponents, such as IBM’s Watson winning Jeopardy! in 2011 over Ken Jennings, who’d once won 74 episodes of the game show in a row.

One of the most impactful pieces of AI in recent years has been Facebook’s algorithms, which can determine what posts you see and when, in an attempt to curate an online experience for the platform’s users. Algorithms with similar functions can be found on websites like YouTube and Netflix, where they predict what content viewers want to watch next based on previous history.

The benefits of these algorithms to anyone but these companies’ bottom lines are up for debate, as even former employees have testified before Congress about the dangers they can pose to users.

Sometimes, these innovations weren’t even recognized as AI. As Nick Bostrom put it in a 2006 CNN interview: “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labelled AI anymore.”

The trend of not calling useful artificial intelligence AI did not last into the 2010s. Now, start-ups and tech mainstays alike scramble to claim their latest product is fueled by AI or machine learning. In some cases, this desire has been so powerful that some will declare their product is AI-powered, even when the AI’s functionality is questionable.

AI has found its way into many peoples’ homes, whether via the aforementioned social media algorithms or virtual assistants like Amazon’s Alexa. Through winters and burst bubbles, the field of artificial intelligence has persevered and become a hugely significant part of modern life, and is likely to grow exponentially in the years ahead.

The Future of Artificial Intelligence

Artificial intelligence’s impact on the world has already been felt in a variety of ways, from chess computers to search engine algorithms to a chatbot so convincing that a Google researcher thinks it’s sentient. So what’s the future of AI? 

Obviously, the future of AI can’t be predicted any more than tomorrow’s lottery numbers can. But even as the research in the field drives the technology further and further, we can put our futurist caps on and speculate about what the world might look like in an AI-driven future.

For clarity, we’ll focus on the AI advances in the world of business, yet we’ll also paint a picture of the world at-large as well.

Also see: Top AI Software 

The Future of AI in Popular Culture

Fiction can have an impact on real-world scientific research. Isaac Asimov’s Three Laws of Robotics – laid out in his short story Runaround – have been part of the discussion of ethics in AI since the field began, even if modern ethical discussions tend to view Asimov’s laws as a fair but lacking starting point.

In these fictional portrayals, there is much anxiety about AI’s use as a weapon. Arguably the most famous fictional instance of AI is either HAL 9000 of 2001: A Space Odyssey or the Terminators from the franchise of the same name. Both properties deal with AI trying to kill humans by any means necessary.

However, AI is just as often portrayed as heroic as monstrous, though its weapon status is often still at the forefront. Many readers might remember The Iron Giant, wherein a 50-foot-tall alien robot struggles with its identity and the United States military before ultimately deciding it’d rather be Superman than a weapon.

These anxieties over AI-as-weapon, well-founded or not, are influential on modern AI-related policy. As recently as 2019, the U.N. was discussing the banning of lethal autonomous weapon systems (LAWS), which calls to mind the exact sort of “killer robot” anxieties present in our fiction.

AI Today: An Overview

AI has become a prevalent part of modern life. When you search for something on Google, you’re dealing with its Multitask Unified Model (MUM), the latest in a series of AI models at the core of Google’s search engine. If you own Amazon’s Alexa or a similar home virtual assistant, you’ve brought an AI into your home.

Focusing on business uses, AI is practically everywhere. Technologies like customer service chatbots, autonomous fraud detection, and automated invoice processing are in use by companies large and small the world over. In fact, this very article was written in Google Docs, which offers a widely used, AI-driven Smart Compose feature.

Almost every major tech company in the world has at least one department actively researching or implementing AI. Countless new AI startups around the world offer AI-driven software-as-a-service platforms that promise to save businesses money. The world of business, especially the tech industry, is awash with artificial intelligence and machine learning.

Also see: Digital Transformation Guide: Definition, Types & Strategy

Uses Driving Future AI 

So, why do these companies use AI so much, and why are there so many startups springing up to offer these sorts of AI-powered services to consumers and executives alike? The easy answer is that AI is a trend with ups and downs, and it’s currently trending upward in terms of interest as a viable business technology.

In fact, Grand View Research projects that the AI market will grow at a compound annual growth rate (CAGR) of 38.1% from 2022 to 2030 (at that rate, the market would expand roughly 13-fold over the period).

Beyond trends, there are viable use cases for AI that are driving the future of AI. As far back as the 1980s, major U.S. companies were using expert systems to automate certain tasks to great effect.

For instance, robotic process automation (RPA) uses AI or machine learning to automate simple repetitive tasks, such as the aforementioned invoice processing. When properly implemented, it can be a tremendous cost-saving tool, especially for small-to-midsize businesses (SMBs) that can’t afford to pay a human to perform the same tasks. Expect this use case to grow greatly in the future. 
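To make that use case concrete, here is a minimal, hypothetical sketch of the kind of rule-based field extraction an RPA-style invoice workflow might perform. The field patterns, sample invoice text, and function name are invented for illustration; real deployments typically layer machine learning-based document understanding on top of rules like these.

```python
import re

# Hypothetical patterns for a few common invoice fields.
FIELD_PATTERNS = {
    "invoice_number": re.compile(r"Invoice\s*#?\s*:?\s*(\w+)", re.IGNORECASE),
    "total": re.compile(r"Total\s*:?\s*\$?([\d,]+\.\d{2})", re.IGNORECASE),
    "due_date": re.compile(r"Due\s*Date\s*:?\s*([\d/]+)", re.IGNORECASE),
}

def extract_invoice_fields(text: str) -> dict:
    """Pull structured fields out of raw invoice text using simple rules."""
    fields = {}
    for name, pattern in FIELD_PATTERNS.items():
        match = pattern.search(text)
        fields[name] = match.group(1) if match else None
    return fields

if __name__ == "__main__":
    sample = "Invoice #: A1047\nDue Date: 08/15/2022\nTotal: $1,280.00"
    print(extract_invoice_fields(sample))
    # {'invoice_number': 'A1047', 'total': '1,280.00', 'due_date': '08/15/2022'}
```

In a full RPA pipeline, a step like this would sit between document capture and posting the extracted values into an accounting system, with a human reviewing only the invoices the rules can’t parse.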

Additionally, many companies use algorithms to optimize the user experience, whether through customer service chatbots or Google Photos’ automatic image enhancement feature. Chatbots provide 24/7 service that no single person could offer on their own, while automatic enhancement can eliminate the human error involved in manually editing photos, leading to more consistent improvements to pictures.
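As a rough illustration of the simplest end of that spectrum, the snippet below sketches a keyword-matching support bot. The intents and canned replies are invented for this example; commercial chatbots typically add natural language models, session state, and escalation to human agents on top of logic like this.

```python
# Toy keyword-matching support bot; intents and replies are invented for illustration.
INTENTS = {
    "password": "You can reset your password under Account Settings > Security.",
    "refund": "Refunds are processed within 5-7 business days of approval.",
    "hours": "Our support chat is available 24/7.",
}

FALLBACK = "Sorry, I didn't catch that. Could you rephrase, or type 'agent' to reach a human?"

def reply(message: str) -> str:
    """Return the first canned answer whose keyword appears in the message."""
    lowered = message.lower()
    for keyword, answer in INTENTS.items():
        if keyword in lowered:
            return answer
    return FALLBACK

print(reply("How do I reset my password?"))  # matches the 'password' intent
print(reply("My screen keeps flickering"))   # falls through to the fallback prompt
```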

Customer-facing AI can have its drawbacks, however. Google Photos’ photo-tagging algorithm has been infamously inaccurate in the past, and anyone who has had to talk to a chatbot for IT support knows how unhelpful one can be if you don’t phrase your request just right. Still, advances in this technology will certainly drive future AI development.

Also see: Best Data Analytics Tools 

Moving Toward Human-Like Learning in AI

Any discussion of the future of artificial intelligence inevitably turns to the idea of AI recreating human-like learning and growth patterns, or of attaining a version of sentience. Since the field first took off in the 1950s, this concept has dominated the discussion of AI, both within the field and outside it.

Yann LeCun, an award-winning computer scientist and chief AI scientist at Meta, published a paper in late June 2022 outlining his vision of how machines could begin to think like humans. In it, LeCun proposes using the psychological concept of a world model to let AI replicate the way humans intuitively predict the consequences of their actions.

An example LeCun uses is the difference between a self-driving car and a human driver. A self-driving car might need multiple failures to learn that taking a turn too fast is a bad idea, but a human driver’s intuitive grasp of physics tells them it probably won’t end well.
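To give a flavor of that idea, and nothing more, here is a toy sketch of an agent that consults a hand-coded stand-in for a world model to predict the consequences of an action before committing to it. The physics, risk threshold, and function names are all invented for illustration and are not drawn from LeCun’s actual proposal.

```python
from dataclasses import dataclass

@dataclass
class CarState:
    speed_mph: float       # candidate speed entering the turn
    turn_angle_deg: float  # sharpness of the upcoming turn

def world_model(state: CarState) -> float:
    """Toy stand-in for a learned model: predict a 'risk of skidding' score.
    A real world model would be learned from observation, not hand-coded."""
    return (state.speed_mph / 100.0) * (state.turn_angle_deg / 90.0)

def choose_speed(turn_angle_deg: float, candidates=(25, 40, 60)) -> float:
    """Pick the fastest candidate speed whose predicted risk stays acceptable,
    simulating consequences in the model instead of learning from crashes."""
    safe = [s for s in candidates
            if world_model(CarState(s, turn_angle_deg)) < 0.35]
    return max(safe) if safe else min(candidates)

print(choose_speed(turn_angle_deg=80))  # sharp turn -> 25 (slow down)
print(choose_speed(turn_angle_deg=15))  # gentle turn -> 60 (full speed)
```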

Throughout the paper, LeCun builds out how this concept could be replicated in an AI. He proposes a six-module architecture in which the modules feed into one another, mirroring the way the different parts of the human brain interact to produce our observations and models of the world.

While LeCun himself admits to limitations and flaws in his proposal, the paper was written for readers with minimal technical or mathematical knowledge, allowing readers in any industry to understand the potential of AI with human-like thinking patterns.

LeCun is hardly the only one looking to the future of AI. Researchers at Google subsidiary DeepMind have developed PLATO, an AI that roughly replicates the way infants learn simple physics concepts.

Outside Advancements Driving AI’s Future

Looking only at advancements within AI itself does not paint a complete picture. Technological progress doesn’t happen in isolated silos, and a cross-disciplinary field like AI is particularly affected by the state of the technology around it.

Cloud computing, for example, has gone a long way toward making AI more accessible. The infrastructure and services provided by cloud platforms mean practitioners no longer need to build and maintain their own hardware for AI workloads.

This goes both ways, as some developers have used AI to help push cloud computing forward. Such integration allows for streamlined data access, automated data mining from cloud servers, and other benefits.

Quantum computing, built on the principles of quantum physics, could allow AI to process larger and more complex datasets than is possible with traditional computing.

IBM is a leader in this field and, in May 2022, unveiled its roadmap for building quantum-centric computers, with some of the first quantum-based software applications expected to begin development by 2025. As quantum computing matures and becomes more accessible, AI is likely to make corresponding leaps as well.

Also see: Why Cloud Means Cloud Native

AI’s Potential Impact on the World

AI has already had a significant impact on the world around us in the 21st century, but as more research and resources are put into the field’s advancement, we will begin to see even more of AI’s influence in our day-to-day lives.

In the world of healthcare, AI gives medical professionals the ability to process increasingly large datasets. Researchers used AI modeling to aid the development of COVID-19 vaccines from the beginning of the pandemic, and as the technology advances and becomes more accessible, it will likely be used to combat other diseases and ailments.

Manufacturing is a classic example of AI and automation reshaping the world. The idea of computers taking blue-collar jobs in this industry is ingrained in the minds of many people in the U.S., and indeed, automation has driven job losses in some industrial scenarios.

In reality, computers aren’t taking everyone’s jobs en masse, but advancements in AI may add still more automation to the process. We could well see AI not just produce manufactured goods but also perform quality checks, ensuring products are fit to ship with minimal human oversight.

With many businesses shifting to work-from-home and hybrid arrangements, AI (specifically RPA) can be used to automate some of the more repetitive tasks in an office setting, such as routine customer support. This gives human employees more time to analyze and develop creative solutions to the complex problems they were hired to solve.

Banking and financial services firms already use AI, and its impact is felt in the way these companies analyze data, provide financial advice, and detect fraud. As AI gets more advanced, we could see banks further leverage it to maintain and facilitate the many services they provide, such as loans and mortgages.

AI’s Growing Influence

The tech industry as a whole is constantly pushing for progress, and artificial intelligence has been one of the pillars of that progress throughout the 21st century. As advancements are made and research conducted, AI’s influence over the industry and the world will likely only grow.

As noted, we see AI in our everyday lives via search engine algorithms and virtual assistants, but as industries like banking and healthcare adopt more and more AI-powered software and solutions, AI could become the most important field of tech in our world today, provided another AI winter doesn’t arrive to cool the field off again.

That said, whether it’s quantum computing taking AI’s capabilities to new heights or the enduring dream of human-like intelligence, AI is expected to play a deeply significant role in both consumer and business markets in the years ahead.

Also see: Real Time Data Management Trends

The post The Future of Artificial Intelligence appeared first on eWEEK.

]]>