3 Things Candy Crush Can Do To Make Cloud Migration Sweeter By Elaina Arce


Candy Crush is migrating to Google Cloud, the first major cloud migration undertaken by the online game-maker King. Starting in early 2019, Candy Crush will be hauling a substantial amount of big data from on-premise infrastructure to Google Cloud Platform.

A cloud migration is no easy feat, and for a company that provides online gaming to over 270 million people globally, choosing the right cloud provider to navigate the challenges of such a move is crucial. Aside from “even richer online gaming experiences,” Sunil Rayan, managing director of gaming at Google Cloud, makes a good case for why Google was the best choice for Candy Crush:

“It will continue to innovate and demonstrate its leadership position as a global innovator by utilising our big data, AI and machine learning capabilities to give its engineers the next generation of tools to build great experiences.”

But with the potential for better gaming, higher speed, and scalability, a cloud migration also comes with a few big risks. Here are 3 things Candy Crush can do to make their cloud migration sweeter:

1. Don’t rush data transfer

Transferring data from on-premise to the cloud is a huge undertaking, especially for a company that claims to have the largest Hadoop cluster in Europe. Moving massive amounts of data in a single push is not recommended because it saturates bandwidth and slows transfer speeds, so it would be best for Candy Crush to make the move in stages, over time, and in anticipation of the potentially massive transfer costs associated with moving data out of or into a cloud.

2. Prepare for potential downtime

Downtime is a huge risk for any application, let alone a game played by millions across the world. Candy Crush can’t afford downtime on a game users say is downright addictive, so it’s important to account for inconsistencies in data, examine network connections, and prepare for the real possibility of applications going down during the cloud migration process.

3. Adapt to technologies for the new cloud

Since choosing a cloud provider means committing a heavy amount of time to reconfiguring an application for the move, it’s important to evaluate whether the technology is the best fit. Technology is a big reason for Candy Crush moving its monolithic, on-premise environment to Google Cloud. Asa Bresin, FVP of technology at King, listed innovations in machine learning, query processing, and speed as drivers for the cloud migration, and with a platform known for speed and scalability, Google has met those requirements.

Bonus: Keep costs in check. Whether it’s heavy transfer costs, money lost during downtime, or the time and manpower needed to reconfigure an application for the cloud, cloud migrations come with costs, and those costs are easily misunderstood or drastically understated. To keep them in check throughout and after the migration, it’s important to understand cloud service offerings, pricing models, and the complexity of a cloud adoption budget. Evaluate all of these costs and look into options that will help you save post-migration, like optimization tools.

With a gradual shift, planning for risks of downtime, and the patience and flexibility to reconfigure for Google Cloud, Candy Crush can win at cloud migration.

via Technology & Innovation Articles on Business 2 Community http://bit.ly/2xqJAHO


Searching the Right DDoS Protection Service for Your Online Business By Rahis Saifi

We often find ourselves asking this very question: how long can a website survive once it experiences a DDoS attack? DDoS attacks have now entered the 1 Tbps era. On February 27th, 2018, Radware observed a sudden surge in activity on UDP port 11211.

Before Radware’s ERT Research team and Threat Research Center could reach a conclusion, other organizations began reporting similar occurrences of a pattern of amplified attacks launched from the same port, 11211.

By the time Radware identified how large the exposure was, with a Bandwidth Amplification Factor (BAF) ranging between 10,000x and 52,000x, predictions had turned into harsh reality: within the next 24 hours, attackers targeted GitHub in what became one of the world’s largest DDoS attacks on record.
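For context, the Bandwidth Amplification Factor is simply the ratio of the bytes a reflector sends to the victim over the bytes the attacker sends to the reflector. A quick illustration in Python (the request and response sizes below are assumptions for the sake of the arithmetic, not measured values):

```python
def bandwidth_amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """BAF = bytes delivered to the victim per byte sent by the attacker."""
    return response_bytes / request_bytes

# A small spoofed UDP request to an exposed memcached server (assume ~15
# bytes) can trigger a response of hundreds of kilobytes (assume 750 kB).
baf = bandwidth_amplification_factor(15, 750_000)
print(f"{baf:,.0f}x")  # 50,000x -- squarely inside the 10,000x-52,000x range
```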

The attack was devastating: it peaked at 1.35 Tbps, or roughly 126 million packets per second on average. As a result, GitHub suffered degraded service for about 10 minutes until the problem was completely resolved.

The attackers succeeded in their attempt against one of the world’s largest code-hosting platforms, and numerous similar attacks followed over the next several days, damaging multiple servers.

After another attack as large as 1.7 Tbps, it became clear that the world had entered the era of terabit denial-of-service attacks.

This pushed security professionals to focus on building better-performing technologies that can defend against such powerful, sophisticated DDoS attacks.

Top 4 DDoS Protection Features to Look for While Searching for the Right DDoS Protection for Your Business

Hackers around the world are leveraging the power of adaptive learning, which has greatly helped them identify new and innovative ways to bypass sophisticated DDoS protection layers.

However, technology firms have not been sitting idle; they have worked equally hard to build technologies that can fend off such powerful attacks.

When we talk about DDoS protection, one can find a number of platforms that offer it on the go. Several platforms promise to provide the right quality of service; however, many fail to deliver service that meets the required standard.

Therefore, to make sure your server infrastructure is well protected from some of the most potent DDoS attacks on the Internet, we have taken the time to do the research.

You need to make sure that whichever security provider you employ for DDoS protection provides you with the right tools and technologies so you can deal with threats efficiently. Looking for something that fits your needs perfectly? These are the four must-have capabilities your security platform should offer in order to mitigate DDoS vulnerabilities.

SSL DDoS Flood Protection

The majority of Internet traffic is now encrypted. The Let’s Encrypt project revealed that more than three-quarters of websites globally now use the HTTPS protocol instead of HTTP, with the USA and Germany among the markets where HTTPS adoption is strongest.

However, the rise of SSL has opened new doors for attackers. An encrypted request can consume up to 15 times more server resources than a regular request, a vulnerability that allows an attacker to overwhelm a website with cleverly crafted malicious traffic at only a small volume.

Around 30% of the malicious attacks that took place last year were SSL DDoS flood attacks: DDoS attacks in which a large number of requests are sent to a website from several host computers at once. These requests flood the server, which ultimately suffers downtime.

With the increase in SSL DDoS flood attacks, the need for a higher level of protection against such malicious activity has become evident.

When you are seeking out the perfect DDoS protection solution for your online business, it is highly advisable to choose one that offers commendable protection against SSL DDoS floods as well.

Zero Day Protection

Modern hackers are adept at breaking into systems that rely on traditional security mechanisms. They are capable of bypassing security protocols that many security specialists still believe are the best option for their online business.

Nowadays, hackers commonly infiltrate systems using a well-known DDoS strategy termed the burst DDoS attack: a short surge of incoming traffic, called a traffic spike, directed at a system.

Most of these traffic spikes consume 70-80% of server resources for a short interval of time. Most bursts are designed to dissolve in less than a minute; however, attack campaigns may go on for hours, days, or even weeks.

To minimize these attacks, security specialists analyse the incoming traffic from the offending sources and create a digital signature to block the harmful traffic before it reaches the website.

Even then, hackers have found ways to evade these signatures: by studying previous vulnerabilities in the system, they can bypass all security protocols and damage it.

As a result, security specialists find themselves recreating manual signatures at a constant pace, a process that becomes painstakingly labour-intensive.

Attacks that exploit a previously unknown security vulnerability are termed “zero-day” attacks. Your firewall plays an important role in keeping zero-day threats out of your system.

You also need to understand that getting your tasks done with fewer software applications is a smart way of keeping yourself protected. To stay secure against threats like zero-days, you need a security platform that fends off such attacks effectively.

Application Layer DDoS Protection

Application Layer (L7) attacks are malicious behaviours that target the top layer of the OSI model, the layer where common internet requests such as HTTP GET and HTTP POST are generated.

In contrast to attacks that only affect the network layer (L3/4), such as DNS amplification, application layer DDoS attacks consume server resources in addition to network resources.

Unlike Network Layer (L3/4) DDoS protection, Application Layer (L7) DDoS protection does not depend on how much network capacity a service can absorb, but on how smartly your security technology can tackle complex attacks using the right protection vectors.

It profiles incoming traffic and distinguishes between humans and bots. It also identifies which web browsers have been hijacked and are being used to flood your system.

Many online security services promise state-of-the-art DDoS protection; however, they fail to deliver it through their WAF. Most DDoS protection services that do provide robust protection against Application Layer (L7) DDoS attacks sell it as a pricey add-on WAF service, offered separately.

A solution that is optimized and requires little effort to integrate is a better option than one you have to configure manually.

Behavioural Pattern Protection

DDoS attacks are becoming more refined over time, and it is increasingly difficult to identify whether the traffic visiting your website is fake or legitimate. These behavioural issues are most often observed at the application layer (L7) of the OSI (Open Systems Interconnection) model.

Most security specialists don’t have a solution that can resolve sophisticated DDoS attacks. Therefore, to stop such attacks, they use rate-limitation caps as traffic volume thresholds.

However, this is a very primitive approach, as it does not identify whether incoming traffic to a website is legitimate. It remains difficult for security specialists to sift the right audience through to the website.

For example, if you own an e-commerce store and you launch a special discount during the holiday season, it becomes hard for specialists to identify which visitors are malicious and which are legitimate.

Deploying unsophisticated methods such as rate limiting will only shield your website from extra traffic. But here’s the downside: the traffic you are trying to keep away from your online store may include potential paying customers.
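A rate-limit cap of the kind described above can be sketched in a few lines of Python. Note that a pure volume cap has no notion of whether a client is a bot or a paying customer; the threshold here is an arbitrary illustration:

```python
from collections import defaultdict

THRESHOLD = 100  # max requests per window from one source (illustrative)

request_counts = defaultdict(int)

def allow_request(source_ip: str) -> bool:
    """Blunt volume cap: blocks any source over the threshold, with no
    way to tell a botnet node from a flash crowd of real shoppers."""
    request_counts[source_ip] += 1
    return request_counts[source_ip] <= THRESHOLD

# A burst from one address trips the cap, legitimate or not...
for _ in range(150):
    allow_request("203.0.113.7")
print(allow_request("203.0.113.7"))   # False: blocked
# ...while a quieter address sails through.
print(allow_request("198.51.100.9"))  # True: under the cap
```

Behavioural analysis replaces the fixed threshold with a learned profile of normal traffic, which is exactly the gap the next generation of tools aims to close.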

However, multiple DDoS protection technologies are now introducing behavioural analysis capabilities. Technologies such as Tcaps Cloud are designed to capture normal user behaviour and, based on the data collected, confirm whether future incoming traffic is coming from a legitimate user or a fake one.

Not only do such technologies provide a stronger level of protection, they also create fewer false positives and will not block potential incoming website traffic.

Are you looking for a DDoS attack protection service that builds a total protection layer around your digital product, so you never have to worry about unwanted traffic surges? There are several other attributes one can discuss when it comes to security; however, the four stated above are the must-have DDoS protection attributes your security service provider should offer you as a customer.

via Technology & Innovation Articles on Business 2 Community http://bit.ly/2xcyCWP

Software Asset Management in a Time of SaaS By Luis Ward


Not to brag, but we’ve been writing about Software Asset Management for a long time at SoftwareONE. That makes sense because, as we wrote in “Implementing a Software Asset Management Plan,” employees across the globe have become more reliant on software for communication, organization and automation of daily operations, rendering SAM a business-critical process.

However, one area we haven’t touched on as thoroughly is how SAM – both the process and the tools – has evolved, and continues to evolve, as more organizations turn from on-premises to the cloud, or Software as a Service (SaaS) applications.

Gartner reported in April 2018 that SaaS continues to be the largest segment of the overall cloud market (including BPaaS, IaaS and PaaS) and is expected to see revenues increase to $73.6 billion by the end of 2018 and constitute 45 percent of overall application software spending by 2021. According to Cisco’s Global Cloud Index for the period 2013-2018, 59% of all cloud workloads will be delivered as SaaS by the end of this year.

Further solidifying the importance of SaaS in the world of SAM is the fact that two major vendors, Flexera and ServiceNow, recently purchased Meta SaaS and VendorHawk respectively to more effectively monitor SaaS spend. Other SAM tool providers are also acquiring SaaS solutions to bolster their existing SAM tool offerings. As more companies adopt a hybrid approach to their software estate, needing to manage both on-premises and, increasingly, SaaS applications, it is imperative that SAM processes and tools keep up in order to better manage overall cloud spend.

Nine key areas

There are 9 key areas to think about when implementing your SAM plan:

1. Spend and added costs: SaaS costs a lot, and any organization using SaaS solutions sees how quickly that spend grows. This is because SaaS is intentionally engineered to make it easy for employees to sign up and invite other employees to use software without the intervention of the IT department, leaving IT procurement without internal financial controls over SaaS. Without an effective SaaS software asset management process, spending can quickly spiral out of control.

Published pricing may appear to be of good value, but extra fees can add up quickly. Common additional costs include extra users, customizations, integrations, third-party services, training, and set-up fees. Work with your sales rep early in the process to understand what additional charges might apply to your account. By far the best way to keep the additional costs down is to avoid customizations to functionality and integration with other systems. Also, negotiate a set rate for incremental growth as the project grows.

2. Compliance and security risks: License compliance is very different from packaged software, and it’s naïve to think that buying a SaaS solution means that there’s no longer a compliance problem. SaaS is simply replacing compliance risk with spend management risk.

If you are non-compliant with on-prem software, you waste money if audited and risk large penalties. With SaaS, you waste money if you’re not proactively managing your users or subscription levels. An example can help illustrate this: Take Adobe Creative Cloud. Do users only need 2 – 3 applications in the Creative Cloud catalog? If so, maybe it’s best to purchase a Single App version of Creative Cloud versus a Creative Cloud All Apps plan. This is the value a good software asset management consultation can give you.
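The Adobe arithmetic can be made concrete. With illustrative prices (assumed round numbers, not Adobe's actual rates), the break-even point falls out directly:

```python
# Illustrative per-user monthly prices -- NOT Adobe's actual rates.
SINGLE_APP_PRICE = 21.0  # one Creative Cloud app (assumed)
ALL_APPS_PRICE = 53.0    # the full Creative Cloud catalog (assumed)

def cheaper_plan(apps_needed: int) -> str:
    """Return the cheaper licensing option for one user."""
    if apps_needed * SINGLE_APP_PRICE < ALL_APPS_PRICE:
        return "single-app"
    return "all-apps"

print(cheaper_plan(2))  # single-app: 2 x 21 = 42, under 53
print(cheaper_plan(3))  # all-apps:   3 x 21 = 63, over 53
```

Running this kind of check per user, at scale, is precisely the subscription-level management a SAM process should automate.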

Snow shares the same view with industry experts such as Gartner in their paper SAM Reaches a Tipping Point: SaaS Cost Management Eclipses License Compliance.

“IT sourcing and vendor management leaders need to recognize that SaaS subscriptions are not a turnkey fix to licensing complexity, but will increase cost risks and add to the demands on SAM.”

(Source: Gartner, Software Asset Management Reaches a Tipping Point: SaaS Cost Management Eclipses License Compliance)

3. Length of Term: If the vendor wants a long-term subscription, we recommend that you start with the shortest, probably one or two years. If you do agree to a longer term of three to five years, make sure you have an out clause. Typically this provides an opportunity to break the contract during a specific time window. For example, it might allow you to walk away after one month of using the system but before 90 days. Another example might be the ability to break the contract if certain levels of service are not provided consistently.

4. Service Level Agreements (SLAs): The SLA is the vendor’s commitment to keeping the system up and running. It is typically expressed as a percentage of “up time.” You will almost always see the SLA represented as 99.9% or thereabouts. However, there is wide variation in how that number is calculated.
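To see what those percentages actually buy you, convert uptime into allowed downtime. A quick sketch, assuming a 30-day month and an SLA measured over total elapsed time:

```python
def allowed_downtime_minutes(sla_percent: float, days: int = 30) -> float:
    """Minutes of downtime per period permitted by an uptime SLA."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% uptime allows {allowed_downtime_minutes(sla):.1f} min/month of downtime")
# 99.0% uptime allows 432.0 min/month of downtime
# 99.9% uptime allows 43.2 min/month of downtime
# 99.99% uptime allows 4.3 min/month of downtime
```

The order-of-magnitude gap between each "extra nine" is why the calculation method in the contract matters as much as the headline number.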

5. Renewals: Given that the renewal process provides an important exit opportunity from a bad contract, as well as an opportunity to re-negotiate, make sure you are still in control when the renewal date comes around. Watch for an “evergreen” renewal, which automatically renews your term, usually 30 days prior to expiration. If you spot one, ask to remove it. If a company refuses to remove the clause, that is a red flag.

6. Backups and recovery: If you input valuable data every day, then you will want to ensure the provider performs a backup each day; others might back up throughout the day. The way backups are performed is also important. Some vendors maintain numerous backups, while others maintain only one and overwrite the previous backup. Separate backup entries allow you to roll back to a prior date if necessary; this takes up a lot of space, so you will probably have to ask for it specifically. The final consideration is whether the data is backed up in a separate data center, which adds a buffer against data loss in the event of a data center disaster.

7. Data export: Finally you will want to include a clause about data export. Two things are key here: you should always retain ownership of your data and you should know how to get it back. This will be most important in two scenarios:

  • If you want to migrate to a new system because you are unsatisfied
  • The vendor goes out of business and you need access to your data even before you select a new system.

The method for getting your data back will vary, but common formats include XML, CSV, and HTML. For the very technical, a SQL export may be better.
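As a sketch of what a simple export looks like in practice, here is CSV written with Python's standard library (the records and field names are invented for illustration; a real export would come from the vendor's export tooling):

```python
import csv
import io

# Hypothetical exported records -- in practice these would come from the
# vendor's export API or a database dump.
records = [
    {"id": 1, "name": "Alice", "plan": "pro"},
    {"id": 2, "name": "Bob", "plan": "basic"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "name", "plan"])
writer.writeheader()       # first row: column names
writer.writerows(records)  # one row per record
print(buf.getvalue())
# id,name,plan
# 1,Alice,pro
# 2,Bob,basic
```

A plain-text format like this is easy to verify and to load into a replacement system, which is exactly what the export clause is protecting.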

8. Shadow IT: Shadow IT refers to technology that has been procured outside of official organizational channels and isn’t managed by the IT team. In a typical SAM plan with on-premises software you have checks and balances in place to ensure that any software purchase goes through specific procurement and approval processes.

With SaaS this isn’t always the case. Employees can simply use the company credit card to buy what they want, when they want it. This can cause serious compliance, data integrity and cost issues, as well as compromising what may already be in place on-premises.

It is vital to ensure that the SAM culture at your organization encompasses checks and balances for SaaS based applications as well, and that the SAM tool you are using can evaluate SaaS usage.

9. Total Cost of Ownership: On-premises license structures tend to be more straightforward than SaaS, as they typically depend on the number of users rather than consumption.

When implementing a SAM process and tool for SaaS, it needs to cover shorter upgrade cycles, how the subscription model actually works, and service renewal costs, to ensure you have full visibility into what the SaaS model is costing your company.

Many organizations deploy SaaS-based applications and have zero visibility into the actual cost of those applications until their cloud budget is entirely out of proportion. SAM can help rein in those costs and make sure your budget stays aligned.

SAM is for SaaS too!

SaaS-based applications are only increasing throughout the business environment, and it’s important to realize that there are differences between how SAM works on-premises and in the cloud.

Good software asset management will cover the following:

  • Discovery: revealing who is using what subscription and which subscriptions are known and unknown.
  • Cost optimisation: Cut SaaS costs, manage license renewals and forecast spend. Take a look at how Pyracloud does this.
  • Monitoring and alerts: covering activity (who is doing what) and security (alerts for risky behavior and suspicious permissions granted to third-party apps).

A comprehensive SAM plan and tool will cover off on both and ensure you have full visibility and control of your assets, and the costs of said assets across the entire software estate.

If you’re just getting started down the SAM “path” and you’re also a business increasingly driven by SaaS solutions, please visit here to learn more about our SAMSimple offering and to better understand how SAM can help you quickly realize the value of your SaaS investment, rein in rogue SaaS spend, and reduce compliance risk.

via Technology & Innovation Articles on Business 2 Community http://bit.ly/2NgCuAg

Combatting the Cybersecurity Skills Gap with Managed Security Services By Bali Kuchipudi

Today, organizations face a perfect storm when it comes to cloud security. As organizations embrace digital transformation, adding new solutions and devices to the stack, cybercriminals are launching sophisticated attacks aimed at leveraging these new solutions as entryways to sensitive data. To minimize the impact of these attacks, regulatory bodies have issued a myriad of new compliance standards, such as the General Data Protection Regulation (GDPR), which result in major penalties if neglected. Securing the cloud has never been more important or challenging.

The Cybersecurity Skills Gap

Unfortunately, this challenge is further compounded by the cybersecurity skills gap. Cybersecurity professionals are in huge demand as organizations adopt digital strategies, yet there are very few professionals who actually have the necessary hands-on security experience organizations seek.

Currently, there are an estimated 350,000 unfilled cybersecurity positions in the US. This trend extends beyond US borders, with estimates showing 3.5 million unfilled security jobs globally by 2021.

Unable to outfit teams with professionals to maintain security infrastructure, organizations are at a heightened risk of data breach and noncompliance.

Challenges Securing the Cloud Due to the Skills Gap

While security concerns hindered cloud adoption for many years, organizations have come to understand that cloud can actually offer enhanced security due to the shared responsibility model. As we talked about in Part 1 of this series, the shared responsibility model divides security maintenance and responsibilities between the subscribing organization and the cloud service provider. This model has been adopted widely by top public cloud providers, including AWS and Azure.

The general rule is that the cloud service provider is responsible for security of the cloud, while the organization is responsible for securing what and who goes into the cloud. More specifically, the cloud service provider is largely responsible for physical infrastructure security, host infrastructure, and computing, networking, and storage software. This is security of the cloud.

The customer is responsible for security in the cloud. This constitutes access management, endpoint protection, application security, firewall configuration, encryption, and data integrity. Organizations are responsible for deploying the necessary solutions and processes to protect what they store within the cloud.

This shared responsibility model can make the cloud a secure option for organizations, however, only if they have a team with cloud security knowhow to deploy data protection solutions, access management policies and tools, and monitor cloud activity for suspicious data movement that might indicate a threat.

However, a recent survey notes that 29 percent of organizations face a shortage of cloud computing security skills within their personnel.

Using Managed Security Services to Combat the Skills Gap

To secure the cloud, organizations need an experienced team. However, these teams are becoming increasingly difficult to outfit as the skills gap persists. This is especially true because as security professionals become harder to find, many organizations are priced out of the hiring race due to increasingly competitive salary offerings.

This is why organizations should utilize managed security services. These offerings combat the challenges posed by the skills gap by equipping organizations with a skilled security team that is familiar with policies of major cloud service providers, understands security and compliance requirements and the tools that help meet them, and can provide constant monitoring.

Among the top benefits of managed security services are:

  • Familiarity: A key benefit of managed security services in the cloud is the team’s familiarity with both the policies of cloud service providers and with the security solutions best suited for each provider. This allows them to give guidance on what exactly falls to the organization in the shared responsibility model, and which tools they should implement to meet that responsibility. From there, this team is able to deploy, monitor, and troubleshoot these solutions moving forward.
  • Compliance Reports: Managed Service Providers (MSPs) are also aware of regulatory standards and the controls that must be in place to maintain compliance. A team of security MSPs can ensure that security solutions and processes are updated when regulations are, to avoid penalties.
  • Monitoring: Successful cloud security requires team members who can constantly monitor the network for anomalous behavior to detect risks and attacks before they can spread. With MSPs, organizations can count on monitoring across the environment at all times. MSPs can augment your security team and help your organization investigate suspicious activities.

Final Thoughts

The cybersecurity skills gap is affecting organizations of all sizes. However, it is not an excuse for an insecure cloud. With regulations growing increasingly strict and attacks more sophisticated, organizations should look to managed security services to provide ongoing expertise and support.

via Technology & Innovation Articles on Business 2 Community http://bit.ly/2NJ8I6B

What are Key/Value Pairs, and How to Use Them in Your App Marketing By Naike Romain

Key/value pairs are a set of linked items: a unique identifier and a value. These aren’t dissimilar to how you might think about the contents of a dictionary – each word represents a unique identifier, or key, and the value is that word’s definition. When used in your app, key/value pairs allow you to populate content, create custom deep links, and much, much more.
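The dictionary analogy translates directly into code. A minimal sketch in Python (the key names here are invented for illustration, not part of any particular SDK):

```python
# A key/value store: each unique key maps to exactly one value, just as
# a word in a dictionary maps to its definition.
user_prefs = {
    "favorite_category": "shoes",  # key -> value
    "loyalty_tier": "gold",
    "push_opt_in": True,
}

# Look up a value by its key.
print(user_prefs["favorite_category"])  # shoes

# A missing key can fall back to a sensible default.
print(user_prefs.get("favorite_genre", "unknown"))  # unknown
```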

To take advantage of key/value pairs in your app marketing, you and your engineering team will need to do a bit of planning. That’s because key/value pairs only work if your app understands them. These identifiers and definitions need to be built into the code of your app so that it can recognize the keys and know how to respond with the right values.

If you’re looking to enhance your messaging campaigns and app experience, using key/value pairs can unlock functionality beyond what you can do with the Localytics Dashboard. It might be hard to know how to take advantage of key/value pairs in the abstract – the possibilities are almost endless! That’s why we recommend coming up with campaign ideas, then working backwards to understand what your app will need to accomplish. Once you have that figured out, you can work with your engineering team to build in the key/value pairs that enable your marketing campaign ideas.

We’re going to dive into some of the ways that you can use key/value pairs to personalize your messages and create unique app experiences.

Messaging

Key/value pairs are a very valuable addition to your mobile messaging toolkit. They’re one of the ways you can personalize messages to individual users or dynamically control how they experience the app.

TIP: Key/value pairs are not visible to the user receiving the message; instead, they are delivered to the app and cause the app’s code to perform some type of action.

Push notifications

The most common use case for key/value pairs is personalizing your messages. You can build a library of images that correlate to specific preferences, like favorite category for retail apps or genre for media and entertainment apps. When building out your push messages, you can automatically incorporate images that correspond with each individual’s favorite categories.

TIP: Keep in mind that you can also use Liquid templating to personalize messages by dynamically inserting values into the message content, e.g. “Hey {{first_name}}, order now for free shipping on your {{item_count}} items!”

Key/value pairs can also deliver data you can use for a more complex rich push experience. For example, on iOS, you can send live streaming video or show the current status of a cab ride or delivery order. To do this, your key/value pair needs to connect a user or order ID to a content extension that will serve the content.

The push message you draft in Localytics should also include key/value pairs that allow the display to be changed by setting the mutable content property. Once the message is received by your app, it will use the key/value pairs to communicate with a service or content extension. The service extension can pull in static content, like message copy, pictures or video, while the content extension allows you to pull in live streaming content like a delivery map.

Key Value Pairs and Extensions

For retail apps, push messages are a great vehicle for delivering deals to app users. To keep the shopping experience consistent between mobile and web, it is necessary to sync discounts or offers (e.g. free shipping) with your PoS system. To do this, send your push message with a key/value pair that syncs the offers available to the end user via your app to your PoS system. This ensures that your user will have access to that offer no matter where they check out, unifying the mobile app experience with the web.

In-app Messages

You can also use key/value pairs to alter the design elements of your in-app messages. Your app may be configured to change the dimensions of the message window, hide the close button, or change the layout of your in-app messages based on a key/value pair included with the message. With help from your dev team, you can design completely custom in-app messages that drive action and engagement in your app.

Another really common use case for key/value pairs is deep linking. When paired with a CTA in your in-app message, a key/value pair can drive users to a specific screen deeper in your app. For example, if you need a user to update their billing information, you can send them to the Account screen by linking to it with the key/value pair.
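
The client-side routing for that deep link might look like the sketch below. The `myapp://` scheme, the `deeplink` key, and the screen names are all illustrative assumptions; in practice this logic lives in your app’s native deep-link handler.

```python
from urllib.parse import urlparse

# Sketch of routing driven by a "deeplink" key/value pair.
# Scheme, key name, and screen names are made up for illustration.

ROUTES = {
    "account": "AccountScreen",
    "cart": "CartScreen",
    "offers": "OffersScreen",
}

def handle_message_extras(extras, default="HomeScreen"):
    """Return the screen to open based on the message's key/value pairs."""
    link = extras.get("deeplink")
    if not link:
        return default
    host = urlparse(link).netloc  # "myapp://account/billing" -> "account"
    return ROUTES.get(host, default)

screen = handle_message_extras({"deeplink": "myapp://account/billing"})
```

Unknown or missing links fall back to the home screen, so a malformed key/value pair never strands the user.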

TIP: Deep links aren’t limited to in-app messages. They can be used with push notifications as well, as long as your app has been set up to handle them.

Inbox Messages

Using the Inbox tool, you can create A/B tests to try out brand-new app experiences and learn how different layouts and colors impact your users’ behavior. When creating an Inbox campaign, you can pass key/value pairs that instruct the app on which version of the layout to display. You can change button colors, copy, or any other design element in your app experience.

With silent inbox campaigns you can change design elements in your app without the user seeing an actual message. This makes it really easy to update layouts or refresh colors in order to see what works best.
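
On the app side, variant selection can be as simple as the sketch below. The keys (`variant`, `button_color`) and default styles are illustrative assumptions, not a fixed Inbox schema.

```python
# Sketch: choose a layout variant from an Inbox campaign's key/value pairs.
# Key names and default values are illustrative.

DEFAULTS = {"button_color": "#0066CC", "layout": "classic"}

def apply_campaign_styles(extras):
    """Merge campaign-supplied design overrides into the default style."""
    styles = dict(DEFAULTS)
    if extras.get("variant") == "B":
        styles["layout"] = "experimental"
    if "button_color" in extras:
        styles["button_color"] = extras["button_color"]
    return styles

variant_b = apply_campaign_styles({"variant": "B", "button_color": "#FF5500"})
```

Users who receive no campaign (or a silent one with no overrides) simply see the defaults, which is what makes silent refreshes safe to ship.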

Mobile marketers are constantly working to optimize and personalize app experiences in order to drive engagement. With key/value pairs, they’re able to tackle more complex tests and design more advanced message journeys that delight users and move the needle.

via Technology & Innovation Articles on Business 2 Community http://bit.ly/2xiLw57

Is There a Talent Gap in Cybersecurity? By Jim Barnish Jr.

Current challenges in the cybersecurity industry have less to do with technological limitations and more to do with a shortfall in human capital. Cybersecurity firms are staring down a growing job shortage across the board. The continued growth of the industry at large has not been matched by an increase in the number of skilled developers, and firms need to take a closer look at their internal hiring protocols before this problem becomes acute.

In order for cybersecurity firms to scale up their operations to meet increasing demand, they must adapt their workforce strategy to match the impressive growth in the software sector.

What does the cybersecurity talent gap look like?

The talent gap in cybersecurity refers to the apparent lack of skilled developers available for important positions at companies and firms. Consider research by Frost & Sullivan projecting that by 2020 there will be 1.5 million unfilled positions in the global cybersecurity workforce. That is a major shortfall, and it raises issues in education, training, hiring strategy, and the scope of work required.

Part of the issue is that, on average, companies are not even looking for the most skilled positions. A cybersecurity market review conducted by Momentum found that 26% of nationwide job postings were geared toward ‘operating and maintaining’ existing systems. Combine that with 24% for ‘securely provision’ jobs (building the security infrastructure itself), and that brings us to 50% before even considering ‘risk management/analyst’ positions. Contrast that with only 16% of job postings for ‘protection and defense’ positions, and the talent gap becomes more clearly defined.

Either there are not enough skilled developers on the market, or the companies themselves are naïve to the issues and believe they can get away with hiring builders and administrators instead of cybersecurity managers and vulnerability analysts. It’s likely that both forces are contributing to the gap in hiring.

Industry Growth Makes the Talent Gap More Acute

This is all happening in an industry that will not sit still. IDC indicates that revenue for cybersecurity firms will grow from $73.7 billion in 2016 to over $101 billion by 2020. That annual growth rate of 8.3% is more than double the rate of overall spending growth in the IT sector. Clearly, companies are in the process of scaling up their cybersecurity departments, and that means they will be looking to hire.

The issue is: how do firms adapt to prevent 1.5 million unfilled positions by 2020?

Firms Need to Create Job Ecosystems and Hire Outsiders

The most proactive approach is to change the existing hiring protocol. Cybersecurity is an industry that certainly requires a healthy dose of technical skill, but technical skill is not the most important factor in hiring. The most important intangibles are curiosity, excellent problem-solving ability, and an understanding of risk potential. People with these aptitudes and an impressive background in another industry should be given more of an opportunity when put up against a developer with a four-year college degree.

Companies should put more resources into on-the-job training and mentorship for those who might not have the experience but certainly have the passion to learn. Not only will the job shortage risk be mitigated, but companies will develop a team of experts that know their system inside and out.

Bringing it all Together

At current growth rates, the cybersecurity industry of 2020 will be hit with a severe job shortage. For companies looking to expand their cybersecurity departments, the most proactive solution is to strengthen on-the-job training programs so that newcomers who might not be experts in the field are given the resources to become so.

via Technology & Innovation Articles on Business 2 Community http://bit.ly/2NIilTa

Google Needs To Make Machine Learning Their Growth Fuel By Louis Columbus

  • In 2017 Google outspent Microsoft, Apple, and Facebook on R&D, with the majority going to AI and machine learning.
  • Google needs new AI- and machine learning-driven businesses with lower traffic acquisition costs (TAC) to offset the rising acquisition costs of its ad and search businesses.
  • One of the company’s initial forays into AI and machine learning was its $600M acquisition of AI startup DeepMind in January 2014.
  • Google has launched two funds dedicated solely to AI: Gradient Ventures and the Google Assistant Investment Program, both of which are accepting pitches from AI and machine learning startups today.
  • On its Q4’17 earnings call, the company announced that its cloud business is now bringing in $1B per quarter. The number of cloud deals worth $1M+ that Google has sold more than tripled between 2016 and 2017.
  • Google’s M&A strategy is concentrating on strengthening their cloud business to better compete against Amazon AWS and Microsoft Azure.

These and many other fascinating insights are from CB Insights’ report, Google Strategy Teardown (PDF, 49 pp., opt-in). The report explores how Alphabet, Google’s parent company, is relying on Artificial Intelligence (AI) and machine learning to capture new streams of revenue in enterprise cloud computing and services. It also looks at how Alphabet can combine search, AI, and machine learning to revolutionize logistics, healthcare, and transportation. It’s a thorough teardown of Google’s potential acquisitions, strategic investments, and the partnerships needed to maintain search dominance while driving revenue from new markets.

Key takeaways from the report include the following:

  • Google needs new AI- and machine learning-driven businesses with lower traffic acquisition costs (TAC) to offset the rising acquisition costs of its ad and search businesses. CB Insights found Google is experiencing rising TAC in its core ad and search businesses. With the strategic shift to mobile, Google will see TAC escalate even further. Its greatest potential for growth lies in infusing greater contextual intelligence and knowledge across the entire series of companies that comprise Alphabet, shown in the graphic below.

  • Google has launched two funds dedicated solely to AI: Gradient Ventures and the Google Assistant Investment Program, both of which are accepting pitches from AI and machine learning startups today. Gradient Ventures is an ROI fund focused on supporting the most talented founders building AI-powered companies. Former tech founders are leading Gradient Ventures, assisting in turning ideas into companies. Gradient Venture’s portfolio is shown below:

  • In 2017 Google outspent Microsoft, Apple, and Facebook on R&D, with the majority going to AI and machine learning. Amazon dominated R&D spending among the top five tech companies in 2017 at $22.6B. Facebook led in percent of total sales invested in R&D at 19.1%.

  • Google AI led the development of Google’s highly popular open source machine learning software library and framework TensorFlow and is home to the Google Brain team. Google’s approach to primary research in the fields of AI, machine learning, and deep learning is leading to a prolific amount of research being produced and published. Here’s the search engine for their publication database, which includes many fascinating studies for review. Part of Google Brain’s role is to work with other Alphabet subsidiaries to support and lead their AI and machine learning product initiatives. One example CB Insights mentions in the report is how Google Brain collaborated with autonomous driving division Waymo, where it has helped apply deep neural nets to vehicles’ pedestrian detection. The team has also been successful in increasing the number of AI and machine learning patents, as CB Insights’ analysis below shows:

  • Mentions of AI and machine learning are soaring on Google quarterly earnings calls, signaling that senior management is prioritizing these areas as growth fuel. CB Insights’ Insights Trends tool is designed to analyze unstructured text and surface linguistics-based associations, models, and statistical insights. Analyzing Google’s earnings call transcripts shows that AI and machine learning mentions soared on the most recent call.

  • Google’s M&A strategy is concentrating on strengthening their cloud business to better compete against Amazon AWS and Microsoft Azure. Google acquired Xively in Q1 of this year followed by Cask Data and Velostrata in Q2. Google needs to continue acquiring cloud-based companies who can accelerate more customer wins in the enterprise and mid-tier, two areas Amazon AWS and Microsoft Azure have strong momentum today.

via Technology & Innovation Articles on Business 2 Community http://bit.ly/2p6jeqD