Companies Divided After Launch of ChatGPT Enterprise
October 5, 2023

ChatGPT Enterprise has divided opinions among companies. Explore how this new AI offering has impacted businesses and why it is causing so much controversy.

OpenAI launched ChatGPT Enterprise as a more secure, business-ready version of ChatGPT. Specifically, ChatGPT Enterprise is designed to be more secure and more compliant with corporate privacy requirements than prior versions of ChatGPT and GPT-4, supporting features such as single sign-on (SSO) and usage insights.

The enterprise AI market is divided on OpenAI, with some companies eager to adopt the technology and others hesitant. Enterprise sentiment on using OpenAI and its GPT suite of products can be categorized as follows:

Early adopters

These companies are eager to leverage OpenAI’s offerings and are already doing so. Clearly, this group is inspired by this year’s remarkable focus on AI and is willing to get on board and “learn as we go.”

Also see: ChatGPT Enterprise: AI for Business

Uncertain

Other companies are waiting to see how the market for generative AI offerings develops, and will choose their model accordingly.

They may have some concerns about data privacy or are not yet sure how the technology will impact their current business processes. A “wait and see” approach is further bolstered by the budget cuts many companies have put in place over the last 12 months.

Never

The remaining companies have already decided they will never use OpenAI, typically for data privacy, compliance or business protection reasons. However, these “nevers” may yet shift their stance as data concerns are addressed, or as the urgency of deploying AI drives decision makers to take calculated risks.

Also see: ChatGPT: Understanding the ChatGPT ChatBot

Companies in the uncertain and never camps point to several common reasons for holding back:

Specialized use cases

This group includes industries with a highly specialized focus, such as healthcare, construction and legal. It is unclear when, or even if, OpenAI will start focusing on these specific areas. Some companies would prefer to start with an open source model and train it for their specialized purposes.

Privacy/regulatory reasons

While ChatGPT Enterprise does guarantee that customer prompts are not used to train OpenAI models, other solutions offer similar guarantees, including offerings from OpenAI’s partner Microsoft.

For some firms, licensing OpenAI could trigger a process requiring all existing clients to sign off on new terms. The cost of doing so might outweigh the incremental product improvement gained from adopting OpenAI.

Betting on faster development and the future of open source

Open source models are rapidly advancing, and OpenAI still does not give customers access to certain training methods, such as Reinforcement Learning from Human Feedback (RLHF).

While open source models currently trail GPT-3.5 and GPT-4, they are improving every month, and some industry experts believe they will catch up to commercial offerings in 2024. Given enterprise timelines, some clients would rather make a long-term investment in developing their own model than adopt OpenAI for the short term.

Also see: Top Generative AI Apps and Tools

Worried about OpenAI’s legal troubles

Corporations such as The New York Times and Barry Diller’s IAC, along with individuals such as author and stand-up comedian Sarah Silverman, are suing OpenAI for scraping their content to train its models.

While the outcomes of these lawsuits are far from certain, some companies are hesitant to bet their future business and product offerings on an API whose legality is being called into question.

Doubts about shifting model performance

While OpenAI is making rapid improvements to offerings like ChatGPT Enterprise, many users have also observed that model capabilities have changed over the last few months. Even beyond the issues of hallucinations and non-deterministic answers, it’s challenging to build a product on a foundation that may change for reasons outside your control or knowledge.

Bottom Line: The Future of ChatGPT Enterprise

Overall, enterprises remain split on integrating OpenAI into their business models. The launch of ChatGPT Enterprise may convince some companies to switch to OpenAI, but a number of factors could still prevent adoption.

In the face of these challenges, OpenAI remains a leading player in the enterprise AI space. In a recent survey of 1,000 general-population adults who had heard of artificial intelligence, ChatGPT was recognized by 49% of respondents (although only 22% said they had heard of OpenAI).

The company is continuing to improve its models and expand its offerings, and it is likely that OpenAI and generative AI as a whole will continue to gain traction across industries in the years to come.

Read next: Generative AI Companies: Top 12 Leaders 

About the author: 

Ivan Lee is the CEO of Datasaur.

Generative AI: 5 Things Business Leaders Must Know
October 4, 2023

Generative AI is transforming the way businesses think about AI. Learn five key insights business leaders need to know about this revolutionary technology.

From writing software code to optimizing business operations, generative AI is driving innovation and efficiency across the business world. This transformative technology comes with a steep learning curve for some businesses, which face a number of decisions. These include deciding whether to build or buy, gauging how much to spend, and addressing generative AI trust and guardrail issues.

Generative AI creates completely new data from existing datasets. This data can be in a variety of formats, including text, images, music and even code—and it can produce results that weren’t in the original input. This versatility allows for an impressive range of creative problem-solving and efficiency-boosting applications.

Let’s look at five things every business leader needs to understand about generative AI to maximize their return on investment:

1) Generative AI is Revolutionizing Business and Job Functions

Generative AI is driving business innovation and efficiency. The technology’s ability to generate new data from an input allows for an impressive range of applications.

Generative AI in the workplace isn’t merely about harnessing data or driving efficiency — it’s about fundamentally reshaping the approach to work. Yes, generative AI can make work faster and easier, but it also opens up entirely new possibilities for innovation and creativity.

What’s key about generative AI technology is its potential to be tailored to the needs of each business. Every business has its own unique or priority datasets.

By identifying their specific needs, businesses can use generative AI to gain a competitive edge. For instance, applied in healthcare, generative AI can help analyze patient data, create a summary of the key points, and then use this summary to draft an action plan for delivering better care.
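As a minimal sketch of what such a two-step workflow could look like in code, the example below chains a summarization prompt into a planning prompt. The complete() helper is a hypothetical stand-in for whatever LLM API a business actually uses:

```python
# Minimal sketch of a two-step generative AI workflow: summarize, then plan.
# complete() is a hypothetical stand-in for any LLM completion API.

def complete(prompt: str) -> str:
    """Send a prompt to an LLM and return its text response (stub)."""
    raise NotImplementedError("Wire this up to your LLM provider of choice.")

def draft_care_plan(patient_notes: str) -> str:
    # Step 1: condense raw patient data into the key clinical points.
    summary = complete(
        "Summarize the key clinical points in these notes:\n" + patient_notes
    )
    # Step 2: feed the summary back in to draft an action plan.
    return complete(
        "Based on this summary, draft an action plan for better care:\n" + summary
    )
```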

Also see: 100+ Top AI Companies

2) Build vs. Buy

One critical business decision that enterprises face when implementing artificial intelligence is whether to build or buy the software. There are several factors to consider, including:

  • Unique data and domain expertise: Custom generative AI becomes valuable when businesses own unique data or specific domain expertise and aim to use that data effectively.
  • Availability of talent: The skills required to build AI tools are specialized and in high demand. Enterprises need to consider whether they have access to this talent and whether they’re more familiar with open-source software or commercial offerings.
  • Project timeline: Building AI tools takes time. If an enterprise needs AI functionality ready quickly, buying it is the best bet. If the time frame is more flexible, building it might be the more practical option.
  • Integration with existing software: It’s crucial to consider whether the vendors an enterprise works with can integrate the AI tools under consideration. Understanding the organization’s procurement process and allowing enough time for it can save the project from unexpected delays.
  • Cost-effectiveness: While it may initially appear more affordable to develop AI tools internally, it’s essential to factor in expenses related to team building, software acquisition and long-term maintenance. If the tool serves as a crucial competitive advantage for the business, it may justify ongoing costs. However, if similar functionality is already available through a tool the company is currently paying for, it doesn’t make financial sense to build and maintain redundant software.

3) Identify the Industry Use Case

Generative AI has a wide range of applications across a variety of industries. Common ones include text generation for crafting captivating marketing copy, summarization for condensing extensive news articles and emails, and image generation for developing unique brand visuals or gaming characters.

AI-powered chatbots are enhancing customer service with intelligent, real-time responses in almost every industry, while translation capabilities are making information accessible across languages. In the coding world, generative AI boosts efficiency by dynamically generating comments and functions.

Healthcare is using generative AI to reduce drug discovery timelines. Game developers are using it to create dynamic game characters. The financial sector is using it to bolster security against transaction fraud and for algorithmic trading. Retail is using generative AI for automatic price optimization.

Businesses find success most quickly when they have an initial project that aligns with their unique needs. For help getting started, ask technology providers for examples they can share from companies with similar project requirements.

Also see: Best Artificial Intelligence Software

4) Investing in Generative AI

Implementing generative AI requires substantial financial investment. It involves procuring high-end infrastructure, hiring skilled talent, assembling the necessary software components and gathering the needed data.

That said, not every enterprise needs to invest in generative AI from scratch. Many are using pre-trained models, or foundation models, which are more cost-effective because businesses only have to augment the model with their own data and apply guardrails associated with their brand.

While the cost of implementing generative AI can be high, the investment return of increased efficiency, innovation and competitive advantage can make it worthwhile.

5) Ensuring Trust and Safety in Generative AI Models

As generative AI grows more advanced and widespread, guaranteeing the safe and responsible use of AI models becomes increasingly paramount. Large language model guardrail tools — software applications that keep generative AI models from deviating from their intended purpose — help developers set boundaries on AI models. These boundaries can be set in three areas (a toy example follows the list below):

  • Topical guardrails: These prevent AI models from veering into undesired areas of use. For example, a customer service assistant powered by generative AI could be prevented from answering questions about the weather if that function isn’t within its designated scope.
  • Safety guardrails: These ensure that AI models provide accurate and appropriate responses. They can filter out unwanted language and make sure that references are made only to credible sources.
  • Security guardrails: These restrict the AI model’s connections to only known, safe external third-party applications.
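As a minimal illustration of the topical case, the sketch below screens a prompt against an allowed-topics list before it ever reaches the model. Production guardrail tools typically rely on trained classifiers or embeddings rather than keyword matching, so treat this purely as a toy:

```python
# Toy topical guardrail: reject prompts outside the assistant's intended scope.
# Real guardrail frameworks use classifiers or embeddings, not keyword lists.

ALLOWED_TOPICS = {"order", "refund", "shipping", "invoice", "account"}

def is_on_topic(prompt: str) -> bool:
    """Return True if the prompt mentions at least one in-scope topic."""
    words = {w.strip(".,!?").lower() for w in prompt.split()}
    return bool(words & ALLOWED_TOPICS)

def guarded_answer(prompt: str, model) -> str:
    # Forward the prompt to the underlying LLM only when it is in scope.
    if not is_on_topic(prompt):
        return "Sorry, I can only help with orders, refunds and shipping."
    return model(prompt)
```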

Also see: Generative AI Companies: Top 12 Leaders 

Harnessing the Power of Enterprise Generative AI

Generative AI is poised to transform the world, offering vast opportunities for companies to innovate, improve efficiency and solve complex problems. As business leaders, understanding and using the power of generative AI will be vital to staying competitive in the rapidly evolving business landscape.

Following the five principles above will help businesses stay ahead of the curve and reap the rewards that this AI technology has to offer.

About the Author: 

Amanda Saunders is Senior Manager of AI Software at NVIDIA.

How to Automate Brand-Licensing Processes in an Enterprise
May 24, 2021

Brand-licensing partnerships are the lifeblood of many of today’s biggest enterprise brands. They’ve become an imperative for any business looking to broaden its brand reach and a novel way to connect with consumers. From movies (product placements) to sports events (sponsorships) to famous characters, brands often have hundreds—sometimes thousands—of licensing partnerships that account for a significant portion of their revenues.

While licensing is big business—according to Amazon, the licensing industry is poised to grow to $1 trillion by the end of this decade—the process is largely archaic and highly fragmented. Each relationship a brand has with a merchandiser represents an entirely separate revenue stream, each with its own set of data, individual departments and stakeholders, and complex workflows and processes.

The result: Most enterprise brands lack the data and insight to answer basic questions, such as: Which partnership is performing the best? Which should we rethink? On which ones should we double down? How long does it take for a product to get to market? How much am I owed in royalties? Where are my products being sold?

The bottom line is this: lack of visibility is costing brands billions in missed opportunities, including the potential to accelerate products to market, inform new product lines and more. But what you can’t see, you can’t maximize. That’s why enterprise brands need a centralized view into the performance of their brand-licensing partnerships to drive and maximize brand-partnership revenues, plus a CRM (customer relationship management) system to granularly manage every step of the partnership lifecycle: every interaction, every relationship and every revenue stream.

In this eWEEK Data Points article, Kalle Törmä, CEO of Flowhaven, uses industry information to share five ways data and automation are modernizing the brand-licensing process.

Data Point No. 1: Break down communication silos.

Any technology solution deployed to manage the brand-licensing agreement process must leverage data and automation to provide robust, centralized and accessible content management. This ensures that your licensing departments can access information, activity and performance metrics. Licensing team alignment is especially important for organizations that want to establish and service partnerships. The system should make it easy to create and share new reports to improve collaboration and communication between partners.

Data Point No. 2: Maximize revenues by gaining granular visibility.

Any technology solution deployed to improve the brand-licensing process must bring together information from different teams and departments to offer a holistic view of each partner in real time. Seeing everything at once gives customer-facing employees who work in sales, account management, product approvals and royalty management superpowers when it comes to making quick and informed decisions on everything from identifying new opportunities to improving the quality of communication and responsiveness.

Data Point No. 3: Strengthen partner and customer relationships.

Managing your company’s relationships with those who buy and use your company’s products and services has never been more important. Simply put, your partners want smart tools that make working with you easy. They also want to feel like you’re hyper-obsessed with meeting their goals. But establishing good licensing relationships requires more than just hard work. Companies need to use modern technologies and tools to get the most out of their relationships. Smart systems offer a single application where you, your team and your partners can spend time every day performing work. They also give you a place to manage and analyze your interactions with past, current and potential licensing partners to make better-informed decisions about the future.

Data Point No. 4: Uncover new business opportunities.

Leveraging the data and automation capabilities provided by an effective technology solution will help you identify new opportunities—and find and keep new partners.

Data Point No. 5: Streamline tedious backend processes.

A new technology solution will allow you (and your team) to avoid manual workarounds for simple tasks that could easily be automated. You will gain the ability to consolidate data on one platform—data that today is stored in spreadsheets or multiple systems across the licensing team. You will also be able to automate workflows (including agreement approvals) and adapt the software to fit your company’s processes.
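To make the point concrete, here is a small sketch of the kind of backend task such a platform automates: rolling up royalties owed per partner from reported sales. The schema, partner names and rates are hypothetical, purely for illustration:

```python
# Illustrative royalty rollup: the kind of backend task licensing teams
# often do by hand in spreadsheets. Schema and rates are hypothetical.
from collections import defaultdict

AGREEMENTS = {
    # partner -> royalty rate agreed in the licensing contract
    "AcmeToys": 0.08,
    "MegaApparel": 0.12,
}

SALES = [
    # (partner, net sales in USD) as reported for the quarter
    ("AcmeToys", 250_000),
    ("MegaApparel", 90_000),
    ("AcmeToys", 40_000),
]

def royalties_owed(sales, agreements):
    """Aggregate reported sales per partner and apply each contract rate."""
    totals = defaultdict(float)
    for partner, net_sales in sales:
        totals[partner] += net_sales * agreements[partner]
    return dict(totals)

print(royalties_owed(SALES, AGREEMENTS))
# {'AcmeToys': 23200.0, 'MegaApparel': 10800.0}
```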

Data Point No. 6: In summary …

Implementing an effective technology solution will allow you to maximize your data and automate processes previously completed manually, which in turn will help you create great brand-licensing relationships and build partner loyalty. Any new solution you choose should also help you organize business-critical information and improve productivity across your team. That is the glue that holds the sales, account management, brand assurance and product approvals, and royalty management teams together. Improved communication and collaboration across those teams will undoubtedly pave the way for further business growth. Whether you’re a small, medium or large company, maximizing your data and leveraging automation whenever possible is critical to your company’s success.

Guest author Kalle Törmä is CEO of Helsinki-based Flowhaven. 

Choosing the Right IT Services Firm for a Post-COVID Paradigm
May 14, 2021

Few, if any, industries escaped the effects of the COVID-19 pandemic. Even in the technology sector, where change is a normal part of business, many professional services and technology consulting firms struggled to cope with the disruption.

That seems almost paradoxical given that organizations across just about every market sector had to embrace work-from-home operations or other business changes and implement the technology resources — including cloud services — to support them. But some of the declines in business for service providers were simply due to a ripple effect. Many companies in industries such as tourism, manufacturing, and hospitality cut costs during the pandemic to stay afloat.

Now, as most economic indicators begin to trend positive, companies are once again ramping up their IT projects. Long-term solutions are necessary at the scale required for sustained operations and cost effectiveness. Innovation and creative problem solving are both critical to help ensure that IT investments made today can accommodate what’s now and what’s next.

Companies must choose the right type of IT services firm to implement the technology solutions they need to survive — and thrive — in a rapidly evolving business climate. Those needs must be met if companies are to adapt to a different way of doing business. In this article, we look at three options – IT consulting firms, IT outsourcing companies and cloud systems integrators – review the pros and cons of each, and make the case that cloud systems integrators may be the best option for the new IT paradigm.

Option 1: The IT Consulting Firm

Pandemic or not, hiring a big-name IT consulting firm often comes with an equally big price tag. Forgoing work with these higher-priced companies or simply cancelling projects reduces expenditures.

There’s also the reality that many of the larger consultancies really aren’t positioned to efficiently or cost effectively help organizations implement necessary IT strategies. Their areas of expertise lie in assurance services, taxation, management consulting, advisory, actuarial, corporate finance and legal services, to name a few.

Through mergers and acquisitions, they’re able to offer some IT implementation services. Where they shine on the technology front, however, is more in terms of big-picture IT strategy and enterprise-wide digital transformation and less on the actual execution.

They may have the staff to take on something like a cloud migration or cloud-native application development project. But technology implementation is not a strategic part of their business, so they aren’t necessarily eager to take on these types of projects — particularly if they aren’t of a scale that would allow them to make a sizable profit.

As is the case with other services delivered by the larger consulting groups, overhead and additional factors tend to drive costs up. It can be difficult for them to price their services competitively compared to smaller or niche technology companies. They also aren’t likely to invest in hiring the best technology implementation talent. As such, the service quality they deliver may not justify the expense for many organizations.

Yet another potential downside: larger firms tend to be less agile than their smaller counterparts. While they sell innovation to their customers, they are often too big and cumbersome to embrace the new ways of working and thinking that they promote on the consulting side. They have to rely on proven, repeatable methodology to remain efficient — even if it’s at the expense of better project outcomes for their customers.

Option 2: IT Outsourcing

Another common option is outsourcing, in which a company hires external resources – often low-cost offshore or commodity providers – to manage various IT functions. Companies that specialize in IT outsourcing aren’t faring much better than the big consultancies. While they may be more cost effective than a large professional services provider or consultancy, many potential clients are becoming hesitant about working with them.

Too many times the old adage “you get what you pay for” has proven true when dealing with low-cost IT services. These companies still tend to lag behind in terms of the leading-edge methodologies that can generate better solutions and more successful projects.

Like the large technology consulting and professional services companies, the IT outsourcing companies also tend to rely on standard processes that can be repeated over and over to keep costs down. There’s no room for innovation. Many prospective customers want to hire a company that will offer up better options rather than just getting a job done as quickly and cheaply as possible.

Option 3: Cloud Systems Integrators

A viable option may be to go with a mid-sized company that specializes in technology implementation — particularly in the areas of cloud migration and cloud-native app development.

These are firms that focus solely on cloud technologies and services. They have the essential in-depth expertise and experience. Because they’ve been exclusively working in the area of cloud technologies, methodologies such as DevOps and agile development are standard ways of doing business.

Cloud systems integrators are able to be more innovative than larger companies that have no wiggle room for experimentation or exploring new methodologies. They’re continually seeking and trying out new and better ways of overcoming technology challenges and solving problems.

These companies develop solutions that can meet current requirements and adapt as those new requirements emerge. For many, their work for a customer is just the beginning of a much longer partnership that will evolve to help that customer continually leverage cloud technologies to meet changing needs.

Yet another advantage many of these cloud-centric service providers have over the large consulting companies and IT outsourcing firms is their niche focus. The big consultancies and outsourcing companies tend to be IT generalists. Each of the major cloud platforms offers hundreds of tools and services. It would be difficult for even the biggest of the outsourcing companies and consultancies to gain and maintain deep expertise in the full spectrum of resources offered by all of the cloud platforms.

However, many of the cloud-centric companies have chosen to focus their efforts on specific platforms, such as Amazon Web Services (AWS) or Microsoft Azure, enabling them to gain familiarity with and expertise in the many tools and services those particular platforms offer. They can effectively leverage those resources to help their customers implement targeted, effective solutions.

Choose Your IT Service Provider Wisely

That’s not to say there isn’t still a place for the large technology consulting firms or IT outsourcing companies. But for companies that want the flexibility, scalability and cost benefits that cloud services provide, working with a company specializing in cloud technologies – as a true, long-term strategic business partner rather than just a point-in-time technical resource – may be the optimal solution.

About the author:

Pavel Pragin is the CEO of ClearScale.

Why Secure App Delivery Should Be a Requirement
May 13, 2021

The explosion of digital life in the COVID-19 era has made network traffic balancing, optimization and security a critical priority for businesses, schools and other organizations. After a long year, there are signs of hope that the pandemic may be beginning to ease, but the need for increased bandwidth, security and connectivity is here to stay.

While COVID-19 sparked a surge in work-from-home, school-from-home and shop-from-home as a matter of necessity, digital transformation had been moving in this direction long before the pandemic began, and it will continue long after the crisis has passed. Enterprises will continue to field a large remote workforce to reduce facilities costs and increase employee flexibility. Schools from K–12 to universities will continue to use the investments and best practices of the past year to broaden their reach. For many consumers, the convenience of online shopping, banking and other tasks will remain compelling even as in-person options expand.

We’ve already seen a rapid expansion of hybrid cloud strategies to enable a more agile response to changing needs. Consumption models are changing as well; vendors are now required to provide flexible solutions for on-premises and hybrid scenarios, along with flexible licensing. But with these changes come new challenges for IT, including ensuring availability, maintaining security, optimizing performance and simplifying management across a more complex infrastructure.

Solving these challenges is a key mandate for CIOs to support new ways of operating—not just during a pandemic, but for the long term. In key IT applications, there is also a clear need to operate in a hybrid environment by taking advantage of growing cloud infrastructure while maintaining control of important business processes. This approach also provides investment protection and addresses the skill gap in cybersecurity while taking advantage of the massive growth in cloud infrastructure.

Surging demand drives a rise in hybrid cloud

The increased use of online applications, websites and services during the past year has had a marked impact on network traffic. In recent surveys conducted by Gatepoint Research and A10 Networks, 47 percent of financial services firms reported rising application services traffic; among e-commerce respondents, a full 86 percent saw an increase—including more than one-third seeing more than 20 percent growth. IT has been quick to respond.

The cloud has played a large role in this effort. In those surveys, 60 percent of e-commerce businesses planned to move applications to the public cloud in the next three years, while 49 percent of financial services firms already hosted applications primarily in the cloud.

At the same time, significant investments in legacy data center hardware make a cloud-only strategy unfeasible for many organizations. As a result, IDC expects that by 2022, more than 90 percent of enterprises worldwide will rely on a mix of on-premises and cloud environments—a hybrid strategy that increases flexibility and scalability, though at a potential cost of greater complexity.

As both shifting demand and ongoing digital transformation drive organizations quickly down the path to hybrid cloud and multi-cloud infrastructure, the application delivery challenges now facing CIOs include four critical areas:

  • ensuring continual application availability;
  • securing applications against rising threats;
  • optimizing performance for a high-quality user experience, and
  • maintaining visibility and manageability across a more complex and diverse infrastructure.

Ensuring continual application availability

In the digital age—and not just during a pandemic—applications are the lifeblood of many organizations. Any downtime can bring employee productivity to a halt, alienate customers and leave students stranded. As organizations move beyond legacy architectures to a more software-centric strategy, automation and artificial intelligence offer opportunities to remove layers of complexity and human error, and thus improve the consistency of application delivery. Advanced server load balancing can help ensure that applications are consistently and reliably available, while global server load balancing extends availability by intelligently distributing application traffic across multiple geographic locations. Redundancy throughout the infrastructure will be key to ensure uninterrupted service as companies look to re-channel their infrastructure investments into new revenue streams.
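As a minimal sketch of the core idea behind server load balancing — distributing requests across a pool while skipping unhealthy members — consider the toy round-robin selector below. Production load balancers layer on session persistence, weighting and geographic awareness, none of which is modeled here:

```python
# Toy round-robin server load balancer with a simple health filter.
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, servers):
        self.servers = servers            # e.g. ["10.0.0.1", "10.0.0.2"]
        self.healthy = set(servers)       # updated by external health checks
        self._ring = cycle(servers)

    def mark_down(self, server):
        self.healthy.discard(server)      # health check failed

    def mark_up(self, server):
        self.healthy.add(server)          # server recovered

    def next_server(self):
        """Return the next healthy server, skipping any that failed checks."""
        for _ in range(len(self.servers)):
            candidate = next(self._ring)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("No healthy servers available")

lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
lb.mark_down("10.0.0.2")
print([lb.next_server() for _ in range(4)])  # rotates over the healthy pool
```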

Securing applications against rising threats

As cyber threats grow in frequency and sophistication, applications, networks and data are vulnerable to risks that include web exploits, distributed denial of service (DDoS) attacks, ransomware, phishing, data theft and threats hidden in encrypted traffic. It’s critical for organizations to maintain effective protection across application delivery infrastructures wherever they are located.

The transformation of network environments has spurred interest in Zero Trust security, in which access controls are extended throughout the environment rather than being limited to a hardened network perimeter; in the A10 Networks surveys, more than 40 percent of financial services firms already had established a timeline to introduce Zero Trust. A complete defense-in-depth strategy should also include measures across the hybrid cloud infrastructure, such as advanced load balancing, web application firewalls (WAF), authentication access management, malicious traffic and bot mitigation, integrated DDoS protection with real-time threat intelligence, data center firewalls and TLS/SSL offload.

Optimizing performance for the user experience

To keep employees productive, customers satisfied and students engaged, organizations have to provide an exceptional and dependable user experience. User experience software can play a valuable role in meeting the expectation for a high level of speed and responsiveness, with optimization to enhance performance across applications, geographically distributed data centers and clouds from different vendors.

Maintaining network management and visibility

In a modern hybrid infrastructure, device-by-device and environment-by-environment management is too complex and too manual to ensure reliable service. IT needs to be able to achieve a holistic view into devices, applications, policies, users, and more across data centers and clouds. In the A10 Networks surveys, a majority of both financial services and e-commerce companies cited addressing management complexity as a key priority, with many calling out a lack of visibility across cloud data centers. These challenges can have a direct impact on cost, compliance, security, and more.

In response, organizations are seeking to modernize application infrastructure by managing on-premises and cloud deployments together as a unified system, rather than within separate silos. This can enable greater simplicity, agility, security and consistency in the way applications and policies are managed.

The ongoing upgrade of infrastructure may have been accelerated by COVID-19, but its necessity has been growing for years. The benefits it delivers will continue long into the future.

Guest author Dhrupad Trivedi is President and Chief Executive Officer of A10 Networks and an expert in the field of application load balancing. Trivedi holds a Ph.D. in electrical engineering from University of Massachusetts, Amherst, a master’s degree in electrical engineering from University of Alabama and an MBA in finance from Duke University.

Eight Best Practices for Securing Long-Term Remote Work
May 13, 2021

Organizations may face a number of potential emergency situations, such as illnesses, floods, natural disasters, power outages, and even cybercrime. Implementing a business continuity plan in the face of such disasters is essential to ensuring that the organization is capable of maintaining operations in spite of adversity.

Often, responding to such emergency situations requires massive efforts from the IT team. This is not just about keeping the network up and running, but also ensuring that data and resources are secure. In fact, the security implications of making what often amounts to a dramatic transition in a short period of time cannot be overstated.

COVID-19 is an example: organizations around the globe relied on their IT teams to quickly implement dramatic shifts and scale to maintain business continuity, in an unprecedented manner and timeframe. Under normal circumstances, moving an entire workforce from corporate networks to home networks, with all of the risks of an unpredictable home environment, would take significant planning and preparation. But time was of the essence.

The rapid transition to remote work did not come without its risks to organizations. Cybersecurity has always been a dynamic space, and responding to the COVID-19 pandemic has reinforced the idea that effective cybersecurity must include the ability to adapt to changing environments and evolving threat strategies.

But in this era of rapid digital transformation, the response to the pandemic simply accelerated the inevitable. Beyond 2020 it will remain essential for IT planning to include and account for hybrid IT. Business users will still need to access critical applications from increasingly distributed data centers that extend across a hybrid IT infrastructure. In fact, it will become more important than ever. Workflows and data will not only exist, but expand across on-premise networks, co-location environments, and private and public clouds—and this broad distribution of valuable and vulnerable content will continue to create an ever-expanding attack surface for organizations.

Gartner predicts that organizational plasticity and IT adaptability will be the central strategic technology trend that businesses should plan for in 2021. Enterprises will have assets in their own data centers, some in private clouds, and other assets in a number of public cloud environments. And the mix of these asset allocations will be dynamic. As a result, organizations will no longer have a single compute model – now or in the future.

While the specific details across industries may vary, what is certain is that organizations need to plan now to support both remote and in-person work well into the future. And that means cybersecurity teams must make sure that infrastructures are prepared to address all scenarios, including utilizing a security-driven networking approach that converges networking and security to protect the enterprise at every edge, from network to cloud.

Cybersecurity and the Remote Workforce: The Data says…

To explore the challenges organizations faced as a result of the shift to remote work, and to examine how organizations plan to secure their remote workforces moving forward, Fortinet conducted a survey and issued the 2020 Remote Workforce Cybersecurity Report. The analysis was conducted midway through that unprecedented year, surveying security leaders across industries—including the public sector—in 17 different countries.

It has eight specific areas of focus:

1. The Sudden Shift to Remote Work Was Challenging for Most Organizations

As expected, the rapid shift to a new work paradigm was not easy. Nearly two-thirds of businesses had to transition over half of their workforce to remote work practically overnight. And to complicate matters further, only 40% of organizations had a business continuity plan in place prior to the pandemic.

But as a result of this rapid shift to remote work, 32% have now invested further in this area. These investments are critical to ensure continued operations not just now but for future crises as well. Those organizations that did not have a remote worker strategy in place quickly recognized the need for one.

This general lack of preparation resulted in 83% of organizations finding the transition moderately, very or extremely challenging. Organizations faced the most significant difficulties when it came to secure connectivity, followed by business continuity assurance and access to business-critical applications. Forty percent of those surveyed ended up spending more on skilled IT workers, as remote employees relied more heavily on IT staff to troubleshoot issues, enable security and stay productive while working from home.

2. Cyber Attackers Saw Telework as an Opportunity

Inherent cybersecurity challenges of moving workers outside the traditional perimeter were exacerbated by the unprecedented cyber threat activity that resulted from an increased reliance on personal device usage. Almost overnight, cybercriminals shifted their focus to target those workers outside the corporate network. The spike in employees remotely connecting to the corporate network led directly to an increase in breach attempts and overall cyberattacks targeting remote workers, endpoint devices and vulnerable home networks. The report shows that organizations identified the most challenging aspects of this transition as being ensuring secure connections, maintaining business continuity, and providing secure access to business-critical applications.

From opportunistic phishers to scheming nation-state actors, cyber adversaries found multiple ways to exploit the global pandemic for their benefit, often at enormous scale, as evidenced by a recent FortiGuard Labs Global Threat Landscape Report. Threats included new phishing and business email compromise schemes, modified and new ransomware attacks, and even nation-state backed campaigns. In fact, according to the 2020 Remote Workforce Cybersecurity Report, 60% of organizations revealed an increase in cybersecurity breach attempts during the transition to remote work, while 34% reported actual breaches in their networks.

During this time, the FortiGuard Labs team documented an average of about 600 new phishing campaigns per day during the spring. And because home users were no longer protected by corporate security devices, web-based malware became the most common attack vehicle, outranking email as the primary delivery vector used by cybercriminals for the first time in years.

3. Defending the Dynamic Perimeter

Network security today is at a turning point because perimeter-based security is no longer sufficient. Expanding attack surfaces and compute demands, new edges and edge devices—including the WAN edge, data center edge, multi-cloud edge and even the home edge—and increasing network complexity make managing threats practically untenable. Given the volume of cyber threats targeting remote workers, and the indication that cybercriminals are aggressively targeting the expanding attack surface, organizations need to carefully consider what technologies and approaches are needed to secure remote work and an increasingly dynamic perimeter moving forward. In particular, defense strategies need to be adjusted to fully account for the extension of the network perimeter into the home.

4. Securing Different Types of Users

Not every employee in an organization requires the same level of access to company resources when working remotely. Organizations should tailor telework security to each type of remote worker (a sketch of how these profiles might be encoded as policy follows the list):

  • Basic teleworker. The basic teleworker usually only requires access to email, internet, teleconferencing and similar business applications, limited file sharing, and function-specific capabilities (finance, HR, etc.) from their remote work site. This includes access to Software-as-a-Service (SaaS) applications in the cloud, such as Microsoft Office 365, as well as a secure connection to the corporate network. Basic teleworkers should connect to the organization using a VPN and use multifactor authentication (MFA).
  • Power user. These are employees that require a higher level of access to corporate resources while working from a remote location. This may include the need to access critical or sensitive information, use bandwidth-intensive applications such as teleconferencing plus screen sharing, or simultaneously connecting to corporate resources using multiple devices. Power users include system administrators, IT support technicians, and emergency personnel. For these power users, deployment of a dedicated access point at their alternate work site provides the consistent access, reliable performance, and level of security that they require. This secure access point should also deliver protected wireless connectivity to the corporate network through a secure tunnel.
  • Super user. A super user is an employee that frequently processes extremely sensitive and confidential information. They require the highest level of security as they access confidential corporate resources, even when working from an alternate office location. This employee profile includes administrators with privileged system access, emergency personnel, and executive management. For these super users, their alternate work site should be configured as an alternate office location, creating a secure enclave within their home network.
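One way to make these profiles concrete is as a policy table that a provisioning script could consult before granting access. The profile names and control labels below are illustrative, not tied to any particular product:

```python
# Illustrative mapping of remote-worker profiles to required security controls.
REMOTE_ACCESS_POLICY = {
    "basic": {"vpn", "mfa", "saas_access"},
    "power": {"vpn", "mfa", "saas_access",
              "dedicated_access_point", "secure_wifi_tunnel"},
    "super": {"vpn", "mfa", "saas_access", "dedicated_access_point",
              "secure_wifi_tunnel", "home_enclave_segmentation"},
}

def missing_controls(profile: str, present: set) -> set:
    """Return the controls a worker still needs before being allowed on."""
    return REMOTE_ACCESS_POLICY[profile] - present

# Example: a power user who has only VPN and MFA configured so far.
print(missing_controls("power", {"vpn", "mfa"}))
```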

5. Securing Remote Work: Best Practices

While many organizations have made improvements in the securing of their remote workforces, survey data reveals several best practices that should be considered for improving secure remote connectivity. These include:

  • Multi-factor Authentication (MFA). While the survey revealed that 65% of organizations had some level of VPN solution in place pre-pandemic, only 37% of those used MFA. VPNs play an important role in ensuring secure connectivity, but they are only one part of securing access. If MFA is not already in place, organizations should consider integrating it into their remote security plans to prevent cybercriminals from impersonating remote workers to gain unauthorized access to network resources (see the TOTP sketch after this list).
  • Network Access Control (NAC) and Endpoint Security. As more employees work remotely, organizations have seen the need to control the influx of non-trusted devices on their networks. As a result, 76% of organizations now plan to acquire or upgrade their NAC technologies. By adopting NAC solutions, IT teams gain increased visibility and control over the users and devices on their network. Organizations also have concerns over the security of remote worker endpoint devices and the risks they introduce once they have been granted network access. This is why 72% of organizations also plan to acquire or enhance endpoint security with endpoint detection and response (EDR) solutions. EDR solutions deliver advanced, real-time threat protection for endpoints both pre- and post-infection.
  • Software-Defined Wide Area Networking (SD-WAN) for the Home. According to the data, 64% of organizations plan to either upgrade or adopt SD-WAN, with many of them now targeting home office use in addition to branch deployments. The critical advantage of extending secure SD-WAN functionality to individual teleworkers, especially super users, is that they can enjoy on-demand remote access, secure Wi-Fi for better home office flexibility, and dynamically scalable performance regardless of their local network availability through redundant connections that leverage things like LTE and 5G.
  • Intent-based Segmentation. Traditional network-based segmentation strategies tend to stop at the edge of each network environment. Intent-based network segmentation instead supports the explosive adoption of IoT and mobile devices, as well as applications and services from multiple clouds, by extending security policies beyond the network edge across multiple networked environments. For example, 60% of organizations plan to upgrade or invest in segmentation to support an inverted network model by extending segmentation functionality into the home.
  • Skilled Security Professionals. While 73% of organizations stated their intention to invest further in skilled IT workers over the next 24 months, the historical lack of skilled IT security professionals could present a challenge as accelerated cloud demand exacerbates the shortage of cloud and security architects.
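To make the MFA point concrete, here is a small sketch of a time-based one-time-password (TOTP) check using the open source pyotp library — the second factor that would sit alongside VPN credentials, not a complete authentication system:

```python
# Minimal TOTP second-factor check using the pyotp library (pip install pyotp).
import pyotp

# Enrollment: generate a per-user secret once, store it server-side, and load
# the same secret into the user's authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

def verify_second_factor(user_code: str) -> bool:
    """Accept the login only if the code matches the current time window."""
    return totp.verify(user_code)

# A code generated right now should verify; a stale or guessed one should not.
assert verify_second_factor(totp.now())
```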

6. Securing Remote Work: Cyber Education is Critical

Now more than ever, employees should understand the part they play in their organization’s security posture.

Organizations need to adopt a cyber education program that includes training remote workers on how to keep themselves, their data and resources, and the organization safe. Many cybersecurity awareness and training courses are currently free during the pandemic, from non-technical courses targeted at teleworkers and their families to more advanced training that educates advanced users about enhanced protection and visibility across every segment, device and appliance on the network, whether virtual, in the cloud or on-premises.

It’s highly recommended that all teleworkers – technical or not – take time to educate themselves about proper security protocols to keep themselves and their organizations safe.

7. Enterprises Must Adapt to Secure Remote Work for the Long Term

According to the 2020 Remote Workforce Cybersecurity Report, nearly a third of organizations anticipate that more than half of their employees will continue working remotely full-time after the pandemic. As a result, security leaders must carefully consider what technology and strategies are required to secure telework well into the future. Temporary fixes and solutions must be made permanent, with an eye toward flexibility, scalability and security.

8. The Future of Work: IT Flexibility

According to Gartner analysts, there are nine top strategic technology trends that businesses should plan for in 2021, and organizational plasticity is the overarching message. Brian Burke, research vice president at Gartner, explained, “What we’re talking about with the trends is how do you leverage technology to gain the organizational plasticity that you need to form and reform into whatever’s going to be required as we emerge from this pandemic.”

Hybrid IT continues to be a key element that organizations need to incorporate in their IT planning because there is no single compute model. Very few companies will be cloud-only or have only a data center. Even if an organization is very cloud-focused, there are still endpoints that must be secured – especially with today’s highly remote workforce – and those endpoints are part of the organization’s network.

To address this challenge, enterprises need to invest in security solutions that provide the flexibility they need to support evolving networks and shifting priorities. Organizations must be able to cope with growing attack surfaces, advanced threats, increased infrastructure complexity, and an expanding regulatory landscape while also adapting their business to evolving consumer and competitive demands.

To achieve their desired business outcomes while effectively managing risks and minimizing complexity, organizations need to adopt a cybersecurity platform that provides broad visibility across their environment, offers a means to easily manage both security and network operations, ensures full integration to enable automation for end-to-end protection, and operates seamlessly and consistently across multiple, highly dynamic environments.

To achieve this, the convergence of infrastructure and security (security-driven networking) has emerged as one of the most important concepts for today’s networking and security teams. It offers organizations the ability to put security anywhere, on any edge, by weaving security and advanced network functionality into a single, highly responsive solution.

This next-generation approach is essential for effectively defending today’s highly dynamic environments—not only by providing consistent enforcement across today’s highly flexible perimeters, but by also weaving security deep into the network itself. It is also designed to encompass the entire network development and deployment life cycle, ensuring that security functions as the central consideration for all business-driven infrastructure decisions, now and into the future.

About the author: 

Peter Newton is senior director of products and solutions, IoT and OT at Fortinet. He has more than 20 years of experience in the enterprise networking and security industry and serves as Fortinet’s products and solutions lead for IoT and operational technology solutions, including ICS and SCADA.

How Fast Object Storage is Aiding the Cloud Transformation Journey
May 13, 2021

The events of last year caused organizations to shift gears quickly, often accelerating the timeline for their cloud strategy – including the adoption or expansion of remote work capabilities.

The cloud handily demonstrated its business value as it enabled organizations to keep going. In fact, Gartner found that nearly 70% of organizations that use cloud services today intend to increase their cloud spending for this reason.

Because speed was of the essence, many organizations quickly erected short-term cloud scaffolding, some using almost a lift-and-shift approach. That worked as a short-term fix, but what’s needed now is a plan for the long haul of the cloud transformation journey. Because data is a key asset for corporations during this journey, this is where fast object storage can play a significant part.

In this article, we’ll look at six important data points and trends related to cloud transformation and fast object storage, provided by the experts at Scality.

Data Point No. 1: Cloud spending is up

Gartner forecasts that overall cloud services spending around the world will grow by nearly 25% in the next four years, and cloud is projected to make up 14.2% of total global enterprise IT spending in 2024.

Data Point No. 2: Vendor lock-in is a major challenge

As the pandemic passes its first anniversary, many companies are just beginning to consider their longer-term cloud strategy. But they must take into account certain long-term challenges. One of them is cloud vendor lock-in, like the hardware/software lock-in of old. While many companies are talking about a hybrid or multi-cloud approach, this isn’t always actually happening, largely because of issues related to vendor lock-in.

Data Point No. 3: App refactoring is a must

App refactoring is another challenge to long-term cloud success. Refactoring involves adapting your applications to run optimally on your cloud provider’s infrastructure – in other words, re-architecting them to better fit cloud or cloud-native environments. You have to ensure that while making application code changes, you don’t affect the external behavior of the app. And that gets dicey.

Data Point No. 4: Object storage offers scale and speed

“Secondary” data — backups and long-term archives — have been the purview of object storage in the last decade. That’s because many have held to the outdated idea that while object storage could offer advantages in scale, it lacked the performance capability for higher-performance applications.

But that’s no longer the case. Today’s object storage can provide very high levels of throughput (fast delivery of data per second), whether in the cloud or on-premises, especially as new object storage solutions leverage flash media. For a wide range of applications managing unstructured data, object storage now serves as the primary storage solution.
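For readers unfamiliar with the model, objects live in flat buckets and are read and written over a simple HTTP API rather than through a file system. Below is a minimal sketch using the boto3 library against an S3-compatible store; the endpoint, bucket and key names are placeholders, not real services:

```python
# Minimal S3-compatible object storage access with boto3 (pip install boto3).
import boto3

# endpoint_url can point at any S3-compatible store, cloud or on-premises.
s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com")

# Write an object: no directories or volumes, just bucket + key + bytes.
s3.put_object(Bucket="analytics", Key="2021/05/events.json", Body=b'{"n": 1}')

# Read it back over the same HTTP API.
obj = s3.get_object(Bucket="analytics", Key="2021/05/events.json")
print(obj["Body"].read())
```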

Data Point No. 5: Big Data needs new management

As the volume, variety and velocity of data continue to increase, data will soon become unmanageable using traditional methods, such as spreading it all across multiple clouds and creating siloed storage tiers. The volume of the data and the number of places you keep it are two distinct dimensions of unmanageability.

Data Point No. 6: Fast object storage means better management

Rather than holding data in silos, a better approach is to use a single tier of storage that is fast enough to be considered the main tier for (at least) 80% of your data. Object storage can now offer this: a single tier that is fast enough, big enough and cost-effective enough to hold everything.

The time for long-term business solutions has come, and the short-term cloud strategies of 2020 must be transitioned to meet long-term needs. More apps mean more data, which means more storage. For a solution that makes that data accessible in a budget-friendly way, fast object storage offers clear advantages.

About the Author:

Paul is CPO at Scality. He is an expert in cloud computing, object storage, NAS and file systems, data management and database technologies.

Customer Empathy: Four Data Points for Understanding How the Pandemic Has Impacted the Customer Experience (CX) https://www.eweek.com/innovation/pandemic-impact-customer-experience-data-points/ Mon, 10 May 2021 14:08:23 +0000

The pandemic has impacted the customer experience in countless ways. Many everyday experiences that used to be physical (think dining out, buying groceries, or going to the doctor) have become partially—or completely—digital. Companies operating from a place of empathy for their customers created exceptional digital experiences during the pandemic that will no doubt continue to provide value in a post-pandemic world. 

The data points below can help companies gain a better understanding of how the pandemic has affected CX, and what they can do to meet and exceed customer expectations now and in the future. 

Data Point No. 1: Feedback has gone remote, and there’s no going back

Prior to the pandemic, most companies relied on a mix of in-person and remote methods to gather customer feedback and conduct research.

Naturally, in the wake of the pandemic, most feedback and research have since been gathered remotely. According to a recent study, however, this change may be here to stay: many CX teams expect remote methods not only to remain but to overtake in-person methods even when meeting face-to-face is safe again.

This is great news for customer-centric teams, who have learned that empathy can be achieved remotely. Taking a remote-first approach to customer and user feedback is a powerful strategy that promotes a customer-centric culture.

Data Point No. 2: The time for digital transformation is now

The pandemic has accelerated most companies’ digital transformation efforts. In 2019, only 56 percent of businesses said their digital transformation was in progress or complete, compared with a whopping 71 percent in 2020.

The pandemic served as a powerful incentive for CX teams to dedicate more time and resources to digital transformation in an effort to improve the customer experience. Digital transformation initiatives that were on the roadmap years into the future were suddenly thrown into the spotlight as teams pivoted to find ways to connect with their customers through more digital channels.

Data Point No. 3: Companies are doubling down on CX

One side effect of the pandemic is that it stripped CX teams of the luxury of analysis paralysis. As companies shifted their priorities to meet the needs of customers in a completely new environment, anything that didn’t directly impact CX or the bottom line instantly became a lower priority. 

This helped teams cut through the noise and step up their CX game. Although many customer experiences have changed, consumer expectations remain at an all-time high. It’s for this reason that 72 percent of companies plan to increase the frequency of their customer feedback and research in 2021 and beyond to meet changing customer needs.

Data Point No. 4: CX teams need to get resourceful 

Nearly 70 percent of companies report that either their spending or workforce was reduced as a result of the pandemic. Adding to this challenge, over half (53 percent) noted that their workload had increased since the start of the pandemic. Today’s CX teams have to do more with less.

Leading CX teams are working harder than ever to meet their customers where they are. Neither a lack of time and resources nor increased demand and workload can be allowed to get in the way of creating an amazing customer experience. Now more than ever, CX teams must work smarter and more efficiently to stay competitive and continue to exceed customer expectations.

The pandemic has accelerated changes within the industry that were already underway, pushing customer-centric cultures and strategies to the forefront, no longer as options but as a necessary means of survival for every company.

Janelle Estes is Chief Insights Officer at UserTesting, an on-demand human insight platform.

Why It’s Critical to Manage Privileges and Access Across Your Multi-Cloud Environments https://www.eweek.com/enterprise-apps/why-its-critical-to-manage-privileges-and-access-across-your-multi-cloud-environments/ Wed, 05 May 2021 18:46:31 +0000

Conventional approaches to privileged access and identity management are ineffective in today’s cloud-oriented DevSecOps environments. The concept of least privilege access remains foundational – and traditional privileged access solutions can still deliver effective security where development and operations are segregated and on-premises architecture predominates.

It is not enough, however, to simply grant permanent standing privileges to a human or non-human user, even if they are limited to only those permissions needed to do their jobs. Especially now, when teams are dispersed and working remotely, credentials are proliferating in the cloud (outside of on-premises security protocols) and are more exposed to theft or abuse.

With DevSecOps teams now commonly working across many clouds, each with their own permission sets and usage models, we need to rethink how we manage privileged access. Let’s consider the individual issues that are preventing DevSecOps teams from easily securing access to cloud resources, and explore potential remedies to these challenges.

In this eWEEK Data Points article, we discuss the four reasons it’s critical to manage privileges and access across your multi-cloud environments.

Data Point 1: Insufficient privilege management

The longstanding approach to cybersecurity in on-premises environments included ringfencing of users and assets—such as firewalls to keep out unwanted network traffic. Conversely, in cloud environments, it’s not possible to ringfence every application, resource, device, or user. Digital identity defines the new perimeter.

The problem is that this new identity-defined perimeter has made managing access privileges orders of magnitude more critical than before. In addition, the privileged access and identity management practices optimized for on-premises situations are ineffective in today’s cloud-oriented continuous integration and continuous delivery (CI/CD) DevSecOps environments.

Recommendation: Today’s dynamic privileging platforms designed to support just-in-time (JIT) privilege grants enable DevSecOps teams to maintain a Zero Standing Privilege (ZSP) security posture in a way that accelerates, not slows, the CI/CD development process.

When dynamic privileging platforms are integrated with existing security tools, such as user and entity behavioral analytics (UEBA) and advanced security information and event management (SIEM) engines, DevSecOps teams can gain deep visibility into cloud application events and access changes.

These capabilities are critical in enabling DevSecOps to get a complete picture of user activity, making it possible to identify threatening user behavior to which security teams must respond. When events occur, administrators can quickly act to protect critical information and cloud services from breaches.
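As a concrete illustration, privilege-change events can be emitted as structured records for an existing SIEM to correlate. This is a minimal sketch in Python; the collector URL and the event schema are assumptions for illustration, not any particular product’s API.

import json
import time
import urllib.request

def emit_access_event(user: str, action: str, resource: str) -> None:
    # Illustrative event schema; real platforms define their own fields.
    event = {
        "ts": time.time(),
        "user": user,
        "action": action,        # e.g. "privilege.granted", "privilege.revoked"
        "resource": resource,
        "source": "dynamic-privileging-platform",
    }
    req = urllib.request.Request(
        "https://siem.example.com/collect",   # hypothetical SIEM collector URL
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)              # the SIEM correlates and alerts

emit_access_event("dev-42", "privilege.granted", "prod-db:admin")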

Data Point 2: Attack surface sprawl

Companies today use hundreds or thousands of cloud services, and a typical DevSecOps operation can easily generate thousands of data access events every day. The result is that each human and machine user ends up with multiple identities and standing privilege sets that sit vulnerable to exploitation.

Recommendation: Again, as with core security concerns, the automated granting and expiring of permissions—JIT privilege grants—is highly effective at minimizing attack surfaces. These JIT/ZSP solutions are built on the concept of Zero Trust, meaning no one and nothing is trusted with standing access to your cloud accounts and data. With JIT permissioning, elevated privileges can extend for the duration of a session or task, or for a set amount of time, and lapse as soon as the user no longer needs access.

Once the task is complete, those elevated privileges are automatically revoked, all without sys-admin involvement. Where a user previously had standing access privileges potentially extending around the clock for months at a time, converting to JIT granting compresses that attack surface to several hours per month. Further, JIT permissioning largely frees organizations from having to maintain and pay for both privileged and non-privileged accounts. Dynamic secrets generation also provides a better model for securing temporarily deployed services and features. The sketch below shows the idea.
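This is a minimal sketch of the JIT/ZSP pattern in Python. The PrivilegeBroker class is a hypothetical stand-in for whatever interface a real dynamic-privileging platform exposes; only the shape of the idea is intended.

import time
from dataclasses import dataclass

@dataclass
class Grant:
    user: str
    role: str
    expires_at: float

class PrivilegeBroker:
    def __init__(self) -> None:
        self._grants: list[Grant] = []

    def grant_jit(self, user: str, role: str, ttl_seconds: int) -> Grant:
        # Elevation is bounded from the moment it is granted, so there
        # is no standing privilege to steal or abuse later.
        grant = Grant(user, role, time.time() + ttl_seconds)
        self._grants.append(grant)
        return grant

    def active_roles(self, user: str) -> set[str]:
        # Expired grants are dropped on every check; revocation needs
        # no sys-admin involvement.
        now = time.time()
        self._grants = [g for g in self._grants if g.expires_at > now]
        return {g.role for g in self._grants if g.user == user}

broker = PrivilegeBroker()
broker.grant_jit("ci-deploy-bot", "db:migrate", ttl_seconds=900)  # one task window
print(broker.active_roles("ci-deploy-bot"))  # {'db:migrate'} now, empty after 15 minutes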

Data Point 3: Unmanaged privilege drift

User privileges tend to expand and change organically over time. This circumstance has long been recognized as a potential source of vulnerability in conventional privileged access solutions. In multi-cloud environments, privilege drift becomes exponentially more difficult to manage and keep consistent, and is far more likely to result in over-privileged users.

Recommendation: Enforce least privilege access (LPA) by automating privilege right-sizing. Dynamic privilege granting enables organizations to automatically monitor and adjust privileges to ensure users have only the privileges needed to do their jobs. Security admins can then quickly survey assigned privileges to identify “blind spots” such as over-privileged users and machine identities. With that insight across clouds, it becomes possible – with security oversight – to remove privileges where they’re not needed and right-size privileged access overall, as the sketch below illustrates.
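A minimal sketch of the granted-versus-used diff in Python, assuming permission sets have already been pulled from the cloud providers and usage has been distilled from audit logs; the data shapes and names are illustrative.

def rightsize(granted: dict[str, set[str]], used: dict[str, set[str]]) -> dict[str, set[str]]:
    # Return the over-granted permissions per identity, human or machine.
    over = {}
    for identity, perms in granted.items():
        unused = perms - used.get(identity, set())
        if unused:
            over[identity] = unused
    return over

granted = {"etl-service": {"db:read", "db:write", "kms:decrypt"}}
used = {"etl-service": {"db:read"}}   # e.g. observed over 90 days of audit logs
print(rightsize(granted, used))       # flags db:write and kms:decrypt for removal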

Data Point 4: Lack of centralized control

Privileges differ from one cloud service to another, so teams must learn each service separately and implement its privilege sets. Additionally, many DevSecOps organizations have had to rely on externally stored or hardcoded credentials—and end up struggling to manage privileges across a diversity of disconnected secure vaults.

Recommendation 1: A more effective approach is to manage secrets through a central management solution, giving DevSecOps teams real-time access to all elements of the secrets infrastructure across clouds and across secrets vaults, including certificates, keys and tokens (see the sketch following Recommendation 2).

Recommendation 2: Employing a unified cross-cloud access model makes it possible to manage privilege sets across cloud services. Centralized provisioning automates privileging processes across all cloud resources, dramatically reducing the likelihood of errors that can place accounts and data at greater risk.
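A minimal sketch of both recommendations in Python: one central interface routes logical secret names to whichever vault actually holds them, so pipelines never hardcode vault-specific paths. The backend classes are hypothetical stand-ins for real vault clients.

from abc import ABC, abstractmethod

class SecretsBackend(ABC):
    @abstractmethod
    def fetch(self, path: str) -> str: ...

class DictVault(SecretsBackend):
    # Toy backend; a real one would wrap a cloud or on-premises vault client.
    def __init__(self, store: dict[str, str]) -> None:
        self._store = store

    def fetch(self, path: str) -> str:
        return self._store[path]

class CentralSecrets:
    # One call site for secrets, however many vaults actually hold them.
    def __init__(self) -> None:
        self._routes: dict[str, tuple[SecretsBackend, str]] = {}

    def register(self, name: str, backend: SecretsBackend, path: str) -> None:
        self._routes[name] = (backend, path)

    def get(self, name: str) -> str:
        backend, path = self._routes[name]
        return backend.fetch(path)

secrets = CentralSecrets()
secrets.register("payments-db", DictVault({"prod/payments/dsn": "postgres://..."}), "prod/payments/dsn")
print(secrets.get("payments-db"))   # callers use logical names, not vault paths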

Conclusion

DevOps and DevSecOps are still new and fast-evolving concepts within the wider computer science and cybersecurity universe. No doubt, DevOps has been wildly successful in accelerating automation and speeding time to market for innovative applications and business services. To date, however, security solutions providers have struggled to deliver privileged access solutions that can secure the devices, data and resources used by DevOps teams, especially in cross-cloud environments. Dynamic privileging platforms using just-in-time (JIT) privilege grants and employing Zero Standing Privilege (ZSP) principles show great promise in solving these problems.

About the Author:

Art Poghosyan is CEO of Britive.

Five Hidden Costs in Code Migration: How to Avoid Surprise Expenses https://www.eweek.com/uncategorized/five-hidden-costs-in-code-migration-how-to-avoid-surprise-expenses/ Wed, 05 May 2021 18:13:28 +0000

Migration to the cloud is in full flight. The most complex challenge involved in completing these migrations is not moving the data but migrating the data processing code to work on new infrastructure in the cloud.

The fundamental challenge is to create code that performs the same business process or returns the same result on the new platform, rather than simply making the old code run on the new platform. This traditionally involves a long, manual process of copying the data, converting the code, testing the code and verifying that the migrated code has the same behavior as the original code.

Philosophically there are three different migration approaches – listed in increasing order of risk:

1. Lift and shift

  • Move the existing code functionality to the cloud

2. Lift, adjust and shift

  • Complete some code redesign during the migration

3. Total redesign

  • Rebuild everything from the ground up

Whatever approach you take, keep in mind the Five Hidden Costs in code migration:

Hidden Cost #1 – Underestimating the scope of the challenge

Migration projects are large, and timelines are unpredictable. The decision to migrate must take the cost into account, but without an accurate estimate of the challenge, timelines slip and costs balloon.

Questions to ask include:

Where is the data processing code and how much code is there?

  • The obvious place to start is in data processing pipelines, but customized code can be found in all manner of places, including BI reporting tools like Tableau.

Does everything need to be migrated? Is some code being run for no good purpose?

  • As a result of history and growth, a significant portion of data processing code is frequently running for no good reason.

How do you structure and plan a large migration?

  • Converting and testing the code must be staged. Best-in-class migrations follow a solid roadmap based on end-to-end data processing. Ideally, a DAG covering all the processing code is available to plan the migration in logical phases, as the sketch after this list illustrates.
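A minimal sketch of DAG-driven phasing in Python, using the standard library’s graphlib; the pipeline names and dependencies are invented for illustration. Each phase contains only the jobs whose upstream dependencies have already been migrated and verified.

from graphlib import TopologicalSorter

deps = {                       # job -> the upstream jobs it reads from
    "raw_ingest": set(),
    "clean": {"raw_ingest"},
    "sessionize": {"clean"},
    "ml_features": {"clean"},
    "bi_report": {"sessionize", "ml_features"},
}

ts = TopologicalSorter(deps)
ts.prepare()
phase = 1
while ts.is_active():
    ready = list(ts.get_ready())           # migratable in parallel this phase
    print(f"phase {phase}: {sorted(ready)}")
    ts.done(*ready)
    phase += 1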

Hidden Cost #2 – Target Platform Qualification

Can the target platform actually perform everything in a way similar to the source platform? If not, major code rewrites are required, project timelines slip and costs overrun.

Typically, the target platform has been tested and qualified against a selection of data processing pipelines. Inevitably, when the entire codebase is explored, features that are not available on the target platform are found. Working around these problems requires the support of subject matter experts (SMEs) on the source platform, and access to the wide variety of SMEs needed becomes a blocking issue that delays the code migration.

With accurate, automated code conversion, an entire code base can be qualified against the target platform without relying on the memory of the SMEs. Discovering all potential issues early in the process helps scope and cost the project up front, and removes the serial SME bottlenecks that, per Amdahl’s law, otherwise dominate the overall timeline. A sketch of such a qualification pass follows.
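As an illustration of automated qualification (using the open-source sqlglot transpiler, not CompilerWorks’ own tooling), the sketch below attempts to convert every statement in a codebase to the target dialect and collects the failures up front, instead of letting them surface mid-project. The sample statement list is invented; in practice statements are harvested from pipelines, scripts and BI tools.

import sqlglot
from sqlglot.errors import ParseError, UnsupportedError

def qualify(statements, source: str, target: str):
    # Try every statement against the target dialect; return the ones
    # that need human (SME) attention, with the reason attached.
    failures = []
    for sql in statements:
        try:
            sqlglot.transpile(
                sql,
                read=source,
                write=target,
                unsupported_level=sqlglot.ErrorLevel.RAISE,
            )
        except (ParseError, UnsupportedError) as exc:
            failures.append((sql, str(exc)))
    return failures

codebase = ["SELECT user_id, COUNT(*) FROM events GROUP BY user_id"]
for sql, err in qualify(codebase, source="teradata", target="bigquery"):
    print(f"needs SME review: {sql!r}: {err}")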

Hidden Cost #3: The Non-Standardization of SQL

“Standard SQL” is a myth. It’s tempting to imagine that because a syntax is legal in two SQL dialects, it means the same thing and returns the same value. It often doesn’t: SELECT 1/2 returns 0.5 in MySQL but 0 (integer division) in PostgreSQL and SQL Server. A manual code translation, particularly by an engineer who is not an SME in both dialects, will tend to focus on successful execution first, and rely on a long tail of testing and validation to confirm that the correct calculation is being performed on the new platform. This takes time, and is prone to error.

The hidden cost is not only the delay in migrating code, but also the issues and errors that are discovered long after the migration is complete. In a typical manual migration, errors keep showing up for around a year after switching to the new platform, and it’s common for an automated migration to discover bugs in SQL code left over from a previous migration, or otherwise latent and undiscovered in core infrastructure code.

Hidden Cost #4: Quality and Consistency, or Monday Morning and Friday Afternoon

The cost of owning code lies not only in authoring and testing it, but also in maintaining it. If quality and consistency are not maintained, there will be ongoing, unexpected maintenance costs on the target platform.

Migration teams must learn accurate, effective and consistent coding patterns on an unfamiliar target platform. The learning curve is steep, so code quality is often inconsistent, particularly in the first pipelines migrated. The problem is worst at the start of the project, creating technical debt on what should be a fresh, clean platform. Issues most frequently arise from misunderstandings of date-time behavior and from coding practices that change as the team gains experience.

The result is a long tail of issues discovered in testing toward the end of the migration, which in turn leads to timeline slips and cost overruns. The worst cases occur when timeline pressure reduces testing, leaving bugs in production code that “executes fine” but does not behave the same as the code on the original platform.

Hidden Cost #5: Avoid Stopping the World

Switching over to the new platform is a major milestone that migration project managers strive to achieve. The best outcome that can be hoped for is “things just work.” Of course there is a major risk that problems will arise which stop the core of an enterprise’s data processing.

To avoid stopping the world, the legacy and target platforms can be run in parallel and the results confirmed to match. This simplifies testing and increases confidence in the correctness of execution on the new platform. Typically this is completed in stages on segments of the data processing infrastructure. The challenge is to correctly “size the segments” to keep the project timeline from exploding — if the segments are too small, testing cycles take too long; if they are too large, identifying issues takes too long. A sketch of the parallel-run comparison follows.
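A minimal sketch of the parallel-run check in Python: execute the same logical query on both platforms and compare order-independent checksums of the results. The connection objects are hypothetical placeholders for the two platforms’ database clients.

import hashlib

def result_checksum(rows) -> str:
    # Order-independent digest: hash each row, then XOR the digests, so
    # equal result sets match regardless of row ordering. (A production
    # check would also compare row counts to catch cancelling duplicates.)
    acc = 0
    for row in rows:
        digest = hashlib.sha256(repr(row).encode()).digest()
        acc ^= int.from_bytes(digest, "big")
    return f"{acc:064x}"

def verify(query: str, legacy_conn, target_conn) -> bool:
    legacy = result_checksum(legacy_conn.execute(query))
    target = result_checksum(target_conn.execute(query))
    return legacy == target   # a mismatch means investigate before cutover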

If you can address these five hidden costs in code migration then you are on track to a successful migration project.

About the Author:

Shevek is the CTO of CompilerWorks.
