How to Be a Savvy AI Buyer By Theresa Carper

In the past three or four years, our concept of what’s “possible” for AI has shifted. Innovations we thought belonged to Hollywood science fiction have become, well, reality. For instance, Google recently released an API that can read handwriting within images, a problem considered unsolvable four years ago.

The market has matured, too. We’ve seen the rise of AI in specialized business applications like detecting credit card fraud, identifying smoking-gun documents in lawsuits, categorizing business contracts, and tuning supply chains. Tasks that previously would have taken teams of specialists weeks or months of work, AI now completes in minutes or seconds.

All of a sudden AI is everywhere. It’s a part of our daily lives that we already take for granted, like the facial recognition that unlocks your phone, or the voice commands that trigger digital assistants like Siri and Alexa. No matter the user, business line, or industry, there’s an AI vendor for you.

But the commoditization of AI has created a new problem. When academics talk about AI, they’re usually referring to “deep learning,” an advanced form of neural network shaped by big data. In the software industry, however, it’s an open secret that “AI” by itself means nothing.

Marketers eager to cash in on AI mania have applied the term to everything from Amazon-style recommendation systems (“If you liked this, you might also like this”) to rule-based products that are little more than an extended series of if/then decision trees, the kind novice coders learn in their first weeks of class.

Associating deep learning with such basic technology has made it difficult for AI buyers to know what they’re actually getting and what the solution can actually do. Even with a product demo, cutting through the marketing noise to see what’s under the hood can be downright confusing.

Since the market has gotten much savvier, you as a buyer should get savvy, too. You need to be diligent, and you need to be on your guard. You can go a long way by keeping these three principles in mind.

  1. Know what you want: What do you need AI to do that your current software can’t? What key business outcomes are you looking for? Don’t invest in AI with vague objectives, out of fear of falling behind. Focus on the outcome you want and then back into how to get there. Decide exactly what you want AI to do before you buy: come up with specific requirements and have vendors prove they can meet them. Defining your success criteria before evaluating vendors not only levels the playing field, it also reduces scope creep and gives your team a common selection framework.
  2. Trust, but verify: Once you’ve decided what you want AI to do, make vendors demo their products in front of you. Literally right in front of you. Using your own data. The goal is to figure out how much their software relies on manual training, and how much on automation. To that end, have vendors analyze data you give them on the spot, sight unseen. If they can’t, it’s not necessarily a deal breaker. It doesn’t mean the product isn’t “true AI.” But it is a yellow flag. At minimum, it suggests the AI will require frequent tuning. As a rule, the more an AI can do on the spot, the less maintenance it’s likely to need and the more flexible it’s likely to be.
  3. Watch out for consultants: The point of AI is automation, so take note of any mention of consulting firms or professional services teams. Some AI frameworks need developers to get up and running. But many so-called AIs are actually rule-based systems that need constant attention. One Fortune 500 company I know of spent $1 million on a system that claimed to have added AI, then had to spend 10 times that on professional services teams to build the product. If a system needs regular updates from large teams of consultants, it’s a strong signal you’re not dealing with AI in any meaningful sense.

Conclusion

In my experience, if you scratch someone who has had a bad AI experience, you’ll find someone who was sold a bill of goods—either basic technology posing as something else, or an unrealistic vision of a sci-fi supermachine.

My advice? Be practical. There’s no room for “maybe,” or “possibly,” especially not in times like these. Focus on what you need. Buy only what you’ve seen work. And leave the fantasy to Hollywood.

For guidance on understanding true AI in contract management software, read my company’s enterprise buyer’s guide.


Smishing: the New Phishing By Ty Mezquita


Many people know about Phishing, a form of social engineering to deceive individuals into doing a hacker’s bidding. Hackers convince users to click on malicious links in an email resulting in malicious file downloads or redirection to credential-stealing websites. Smishing is a lesser-known form of phishing that targets smartphone users via text or SMS messages. When it’s successful, smishing tricks the recipient into taking some action. Like a phishing attack, it could be visiting a fraudulent site and giving up your credentials or downloading a rogue application that can compromise your phone or steal personal information. Simply put, smishing is phishing through text messages.

Why Smishing, Not Phishing?

Hackers are continually finding new ways to obtain users’ data, and they are turning to smishing because people tend to trust a text message more than an email. Most people are aware of the security risks involved with clicking on links in emails; this is less true when it comes to text messages. A study presented by TechTalk showed that 98% of text messages are read and 20% are responded to, whereas 45% of emails are read and only 6% are responded to. It should come as no surprise that hackers are launching smishing attacks with increasing frequency.

Smishing Recommendations

In general, you don’t want to reply to text messages from people you don’t know; that’s the best way to remain safe. This is especially true when the text comes from a number that doesn’t look like a real phone number, such as “5000” or “452-981”. This is a sign that the message is actually just an email sent to a phone. You should exercise basic precautions when using your phone, such as:

  • Don’t click on links you get on your phone unless you know the person they’re coming from. Even if you get a text message with a link from a friend, consider verifying they meant to send the link before clicking on it.
  • A full-service Internet security suite isn’t just for laptops and desktops. It also makes sense for your mobile phone. A VPN such as ‘Norton Secure VPN’ is also an advisable option for your mobile devices. This will secure and encrypt any communication taking place between your mobile and the Internet on the other end.
  • Never install apps from text messages. Any apps you install on your device should come straight from the official app store. These programs go through rigorous testing procedures before they’re allowed in the marketplace. If you have any doubts about the safety of a text message, don’t even open it.
  • If you receive a text message telling you to update settings or unsubscribe from a service you haven’t signed up for, ignore the message. If you see any unauthorized charges on your credit card or debit card statement, take it up with your bank or card issuer. They’ll be on your side.

Almost all of the text messages you get are going to be totally fine. But it only takes one bad one to compromise your data and security. With just a little bit of common sense and caution, you can make sure that you don’t become a victim of smishing.

Further Cybersecurity Recommendations

Beyond the smishing recommendations above, CyberHoot recommends the following measures to help you stay secure in your day-to-day life online:

  • Train employees on cybersecurity basics, helping them become more aware of the threats they face when interacting online. (Phishing, Smishing, Social Engineering)
  • Periodically Phish Test Employees (at least annually, but preferably quarterly or monthly)
  • Be wary of public, unsecured WiFi (use a VPN if dealing with sensitive information)
  • Guide employees with cybersecurity policies, following NIST Guidelines (WISP, Acceptable Use, Password Policy, etc.)
  • Employ a Password Manager, require it in your Password Policy, demand strong password hygiene in your employees and business
  • Enable Two-Factor Authentication wherever possible and especially on all Internet-facing services you use (O365, Salesforce, Finance apps. etc.)
  • Work with your IT staff or third-party vendors to ensure your critical data is being encrypted at rest and in transit (ensure keys are strong and passwords long)
  • Regularly back up critical data following the 3-2-1 methodology
  • Use the principle of least privilege
  • Patch your systems regularly and triage critical vulnerabilities using a repeatable process with established timelines based upon threat levels
  • Stay current with the always-changing cyber threats
  • Consider hiring a virtual Chief Information Security Officer (vCISO)

By implementing these measures at your business you’ll become more aware and more secure. You can take comfort knowing your company is prepared for these attacks.



The Best Data Wins: Strategies to Overcome Web Data Collection Challenges By Julius Cerniauskas

It should go without saying that data is crucial to a business strategy, especially in today’s economic landscape dominated by issues concerning competition, process efficiency and decreased consumer demand.

Data is the essential tool that can provide solutions to all these issues, and its collection and analysis is fundamental to the success of all businesses. Like many things in life however, quality is more important than quantity, and I believe that quality data is worth its weight in gold.

I’ve overseen hundreds of global businesses from various sectors over the years at Oxylabs, and have noticed some patterns. In this short article, I’m going to share some insights for overcoming data collection challenges so that businesses can get the data needed to meet and surpass their goals.

Web Scraping: The Quest for Quality Data

Web scraping, for those who may not know, is the process of collecting data from a website using applications that scan and extract data from its pages.

The internet is full of publicly available data ready to be collected and analyzed. Web scraping is the process of gathering that data and then analyzing it for patterns and insights useful to meeting the strategic goals of a business.

Web scraping, like a lot of things, is easier said than done. If the internet is like a mine, then an effective web scraping strategy will ensure we get the “gems” of data required to make a real difference in the success of a business strategy.
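To make the process concrete, here is a minimal Python sketch using the widely available requests and BeautifulSoup libraries. The URL and CSS selectors are illustrative placeholders, not a reference to any real site’s structure.

```python
# A minimal web scraping sketch: fetch a page, parse its HTML, and pull out
# structured fields. The URL and selectors below are hypothetical.
import requests
from bs4 import BeautifulSoup

url = "https://example.com/products"  # placeholder target page
response = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=10)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")

# Extract each product's name and price; the selectors depend on the site's markup
for item in soup.select(".product"):
    name = item.select_one(".product-name")
    price = item.select_one(".product-price")
    if name and price:
        print(name.get_text(strip=True), price.get_text(strip=True))
```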

Overcoming Web Scraping Challenges

The bigger a project, the more complex it becomes, and web scraping is no exception. As projects scale up, complexity grows with increased volume, additional data sources, and geographical restrictions.

Here are four of the most common challenges I have come across, along with some solutions:

1. IP Blocking

Since the internet is a digital treasure trove of publicly available data, millions of scraping applications continuously navigate the web gathering information. This often compromises the speed and functionality of websites. Servers address this issue by blocking IP addresses making multiple simultaneous information requests, stopping the scraping process in its tracks.

Solution:

Servers can easily detect “bots” or scrapers making multiple requests, so the solution to this challenge requires the use of proxies that mimic “human” behaviour.

Data center and residential proxies can act as intermediaries between the web scraping tool and the target website. The choice between them depends on the complexity of the website, and in both cases the proxies mimic the effect of hundreds or thousands of users making requests for information. Because the requests are spread across many proxies, rate limits are rarely exceeded and IP blocks by the server are not triggered.
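As a rough illustration of the idea, the sketch below cycles each request through a small pool of proxies using Python’s requests library; the proxy addresses are placeholders, and a production setup would draw on a much larger managed pool.

```python
# Rotate outgoing requests across a pool of proxies so no single IP
# trips the target server's rate limits. Addresses are placeholders.
import itertools
import requests

proxy_pool = itertools.cycle([
    "http://203.0.113.10:8080",
    "http://203.0.113.11:8080",
    "http://203.0.113.12:8080",
])

def fetch(url):
    proxy = next(proxy_pool)  # a different exit IP on each call
    return requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=10)

for page in range(1, 4):
    print(fetch(f"https://example.com/catalog?page={page}").status_code)
```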

2. Complex/Changing Website Structure

Web scraping applications scan the HTML of a website in order to extract the information required. Since every development team uses different structures and coding conventions, each site presents a distinct challenge for scrapers looking to download its content.

Solution:

There is no “one size fits all” solution when it comes to web scraping because each website is different. This challenge can be addressed in two ways:

(1) Coordinate web scraping efforts in-house between developers and system administrators to adjust to changing website layouts, dealing with complexities in real time; or

(2) Outsource web scraping activities to a third-party highly-customisable web scraping tool that will take care of the data-gathering challenges so company resources can be diverted to analysis and strategy planning.

Each solution has its pros and cons, however it’s always helpful to remember that scraping the data is only the first step. The real benefits come from organizing, analyzing, and applying the data to the needs of your business.
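For teams handling this in-house, one defensive pattern is to try several known selector variants in order, so that a cosmetic layout change degrades gracefully instead of silently breaking the pipeline. The selectors below are hypothetical.

```python
# Fall back through known selector variants as a site's layout evolves.
from bs4 import BeautifulSoup

PRICE_SELECTORS = [".price-current", ".product-price", "span[itemprop='price']"]

def extract_price(html):
    soup = BeautifulSoup(html, "html.parser")
    for selector in PRICE_SELECTORS:
        node = soup.select_one(selector)
        if node:
            return node.get_text(strip=True)
    return None  # layout changed beyond known variants: log it and alert a developer
```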

3. Extracting Data in Real Time

Web scraping is essential for price comparison websites, such as those covering travel products and consumer goods, because the content on these sites is built by extracting information from multiple sources.

Prices can sometimes change on a minute-by-minute basis and in order to stay competitive, businesses must stay on top of current prices. Failure to do so may result in losing sales to competitors and incurring losses.

Solution:

Extracting data in real time requires powerful tools that can scrape data at minimum time intervals so the information is always current. When it comes to large amounts of data, this can be very challenging, requiring the use of multiple proxy solutions so the data requests look organic.

Due to the growing number of requests, every operation increases in complexity as it scales up. A successful collaboration with data extraction experts ensures that all the requirements are met so the operation is executed flawlessly.
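A simplified way to picture near-real-time extraction is a polling loop that re-checks each target at a short, fixed interval and records only the changes; fetch_price() here stands in for scraping logic like that shown earlier, and the URLs and interval are illustrative.

```python
# Poll each target on a fixed interval and record only changed prices.
import time

targets = ["https://example.com/item/1", "https://example.com/item/2"]
last_seen = {}

def fetch_price(url):
    ...  # scrape and parse the current price for this URL

while True:
    for url in targets:
        price = fetch_price(url)
        if price is not None and price != last_seen.get(url):
            last_seen[url] = price
            print(f"{url} changed to {price}")
    time.sleep(60)  # one-minute cadence; tighten as freshness requirements demand
```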

4. Data Aggregation and Organization

Scraping data can be thought of as research. Effective research techniques make all the difference in collecting the most relevant data.

Recall the research projects from our school days. They required much more than just going to the library and grabbing a stack of random books. The right books were required, and the information in those books needed to be extracted and organized so it could be efficiently used in our projects.

The same can be said for web scraping. Just extracting the data is not enough – it must also be aggregated and organized according to the research goals of the business.

Solution:

The solution that saves time and money here is expert consultation. Experienced data analysts understand where to find the right data and how to collect it effectively.

As I mentioned earlier, quality beats quantity. Extracting the data is not enough; it must be strategically sourced, optimally extracted, expertly organized, and analyzed for patterns and insights. A workflow of this nature yields more accurate and precise data, which in turn supports better decision-making and successful strategy execution.
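As a small illustration of the aggregation step, the sketch below uses pandas to normalize scraped records, drop duplicates, and summarize them around a research goal; the field names and records are invented for the example.

```python
# Normalize, deduplicate, and summarize scraped records with pandas.
import pandas as pd

records = [
    {"source": "site-a", "product": "Widget", "price": "19.99"},
    {"source": "site-b", "product": "Widget", "price": "18.49"},
    {"source": "site-a", "product": "Widget", "price": "19.99"},  # duplicate row
]

df = pd.DataFrame(records)
df["price"] = pd.to_numeric(df["price"], errors="coerce")  # clean the price field
df = df.drop_duplicates()

# Organize around the business question, e.g. lowest price per product per source
print(df.groupby(["product", "source"])["price"].min())
```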

A Final Word

Web scraping is a valuable yet complex tool that is absolutely essential for excelling in today’s competitive business landscape.

Over the years I have seen many challenges and believe there is always a solution to any problem so long as there is a willingness to provide support and adapt to constant change.

Data is ultimately a powerful problem solver for many issues that can empower businesses into making the most accurate decisions. By overcoming challenges, businesses can move forward and grow, adding value to their operations and to society overall.


State & Local Government Cyberattacks Up 50% By Ty Mezquita


Cybersecurity firm BlueVoyant published a report on August 27, 2020, finding that State and Local Governments have seen a 50% increase in cyberattacks since 2017. The report categorized the attacks as targeted intrusions, fraud, or damage caused by hackers. BlueVoyant noted that the 50% increase is likely a fraction of the true number of incidents because many go unreported.

Why?

The main weakness of State and Local Governments is the general absence of a basic security program to educate and govern users, combined with missing key technology protections for their networks and endpoints. Additionally, government entities purchase cyber insurance as standard operating procedure. Hackers recognize this and target them, knowing that cyber insurance will pay out a ransomware demand.

The study validated BlueVoyant’s position that active threat targeting happens across the board:

“For every selected county’s online footprint, evidence showed some sign of intentional targeting. What’s more, five counties — or 17% of the 28 studied — showed signs of potential compromise, indicating that traffic from government assets was reaching out to malicious networks. There’s a collective risk here because there is no standardization [around security controls]. You have certain state and local [governments] that are on dot-coms and dot-us or dot-orgs. One would think that these should be on the dot-gov domain because [that] means that you not only check the box as being a certified government site, but you get forced two-factor authentication and you’re always going to have HTTPS.”

Austin Berglas, Head of Ransomware/Incident Response at BlueVoyant

Ransomware

The main method used to attack these agencies is ransomware. Ransomware has grown exponentially in recent years, with government entities being attacked weekly. Also concerning is the increase in hackers’ extortion demands: three years ago, the average ransomware demand was $30,000; in 2020, it grew to nearly half a million dollars. Even when municipalities don’t pay, breach recovery costs can be enormous. The City of Baltimore spent more than $18 million on damages and remediation after a 2019 ransomware attack.

The risk with small governments is similar to the risk with SMBs; they assume they are not at risk due to the size of their organization. What all these entities don’t realize is that hackers target them because they lack proper cybersecurity programs.

Phishing

The other primary attack vector hackers use on government employees is Phishing. Phishing is a form of social engineering that deceives individuals into doing the hacker’s bidding. Hackers want users to click on malicious links in email, which download malware granting hackers system access. The report notes that typosquatting, a common phishing strategy, was the main way users were tricked. Typosquatting uses look-alike domains to fool users into clicking on links. Users land on identically formatted websites that steal their login credentials for the hackers to use. An example is “arnazon.com” instead of “amazon.com”. A hacker can then use those stolen Amazon credentials to order merchandise delivered to their PO Box.
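To illustrate how look-alike domains can be caught programmatically, here is a toy Python check that flags domains suspiciously similar, but not identical, to known brands; the brand list and threshold are illustrative, and real defenses use far richer signals.

```python
# Flag near-miss domains using a simple string-similarity ratio.
from difflib import SequenceMatcher

KNOWN_BRANDS = ["amazon.com", "microsoft.com", "paypal.com"]

def looks_typosquatted(domain, threshold=0.85):
    for brand in KNOWN_BRANDS:
        similarity = SequenceMatcher(None, domain, brand).ratio()
        if domain != brand and similarity >= threshold:
            return True  # very close to a known brand, but not the real thing
    return False

print(looks_typosquatted("arnazon.com"))  # True: a near-match for amazon.com
print(looks_typosquatted("example.com"))  # False
```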

2020 Election Risks

The upcoming 2020 election opens up the opportunity for hackers to cause more trouble. This puts cybersecurity into the spotlight as the last line of defense against election tampering. Governments need to prepare and develop a strong cybersecurity program ahead of these elections. CyberHoot has a simple and effective set of recommendations for State and Local Governments to protect themselves.

State & Local Government Recommendations

According to Austin Berglas, Head of Ransomware/Incident Response at BlueVoyant, “State and local governments can take three immediate steps to improve their security postures”.

  1. Implement strong passwords.
    • Use unique 14+ character passwords/passphrases stored in a Password Manager.
  2. Two-Factor Authentication
    • Something you know (password), something you have (cell phone), something you are (fingerprint, face ID). Choose and use two of these to authenticate.
  3. Review and strengthen remote access
    • Ensure remote access ports automatically close after use
    • Enable Two-Factor Authentication on all remote access

Ransomware & Phishing Protection

CyberHoot also recommends the following additional actions to reduce the likelihood of falling victim to a Ransomware or Phishing attack:

  • Educate employees through an awareness training tool like CyberHoot
  • Phish Test Employees to keep them on their toes
  • Follow the 3-2-1 backup method for securing all your critical and sensitive data
  • Govern employees with cybersecurity policies
  • Purchase and train your employees on how to use a Password Manager
  • Follow proper Internet etiquette and protect others from phishing attacks that spoof your domain by setting up SPF, DKIM, and DMARC records to block emails that impersonate it

No matter what sort of attack vector hackers are using, following these recommendations is a great starting point in building a strong defense-in-depth cybersecurity program.

Sources:

BlueVoyant Report on State and Local Government Attacks

GCN

GandCrab Ransomware

Phishing – CyberHoot Cybrary Term


Business Intelligence: How Organizations Use Employee Monitoring Data By Dale Strickland

It’s a common myth that employee monitoring software is for spying on employees and micromanaging how they spend their time at work. In reality, monitoring employee computer activity provides companies with the insights they need to understand how their workforce operates. In this article I’ll provide examples of how organizations use employee monitoring data to improve their business intelligence.

Make data-informed management decisions

Business data is incredibly powerful for making informed management and business planning decisions. It reveals trends and patterns that let you analyze existing business processes and improve them.

Monitor trends in employee productivity

Image: Employee productivity report from BrowseReporter employee monitoring software.

An organization’s historical computer usage data can be integrated into business intelligence tools such as Tableau or BigQuery to gain advanced insights into employee productivity trends. Employee performance monitoring and analytics provide insights on both a granular level (individual users or workgroups) and on a high level (departments, offices, and regions).

  • Do we have departments with consistently high utilization rates? Are they overworked? Is there an opportunity to grow the company in this area?
  • Are employees making use of the new software/solutions we recently implemented? If not, do employees need more training?
  • How engaged are our employees? Do they spend the majority of their time on-task?
  • Does the engagement rate of employees naturally decline at this time of year? Why?

By leveraging historical data businesses can better understand productivity and engagement trends throughout their workforce. This data can be combined with the datasets from other tools to produce valuable dashboards that the company uses to answer questions they may otherwise not have enough insight for.
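As a hypothetical sketch of that integration, the snippet below rolls raw activity logs up into a weekly utilization table by department, the kind of tidy output a tool like Tableau or BigQuery would then chart; the column names and figures are invented.

```python
# Aggregate raw activity logs into a per-department utilization trend.
import pandas as pd

logs = pd.DataFrame({
    "department": ["Sales", "Sales", "Support", "Support"],
    "week": pd.to_datetime(["2020-09-07", "2020-09-14", "2020-09-07", "2020-09-14"]),
    "productive_hours": [30.5, 33.0, 36.0, 35.5],
    "total_hours": [40, 40, 40, 40],
})

logs["utilization"] = logs["productive_hours"] / logs["total_hours"]
trend = logs.pivot_table(index="week", columns="department", values="utilization")
print(trend)  # ready to feed into a BI dashboard
```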

Address disengagement and inefficiencies in the workplace

Quote: "Figuring out where we were losing productivity gave us a huge help when it came to increasing efficiency". The quote is by Larry Salvucci, IT Manager of Boston CenterlessSee how Boston Centerless skyrocketed their productivity with BrowseReporter

It’s perfectly natural for employees to occasionally browse the internet for non-work purposes throughout the workday. So long as employees are meeting their objectives and not causing harm to others this practice is often overlooked entirely.

You trust your employees, and you probably don’t mind a modest amount of personal browsing so long as the total time spent on unrelated tasks isn’t excessive. But how do you define excessive usage? Even with a definition in place, you can’t measure it without accurate browsing data.

Once a threshold is established, self-monitoring becomes an important practice for keeping personal computer usage within an acceptable range while giving employees the autonomy to manage themselves.

From a business intelligence point of view this very same data can show you which employees or departments consistently spend a significant amount of time on unrelated websites and applications. Consistent excessive browsing may be a sign that employees are underutilized or they may have excess downtime due to undiagnosed bottlenecks. With this data you can start asking the right questions to diagnose the issue at hand and reallocate resources as needed.
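One way to operationalize a written definition of “excessive” is a simple threshold check over the monitoring data, as in the sketch below; the 20% cutoff and field names are assumptions for illustration, not a recommendation.

```python
# Flag users whose unproductive browsing exceeds an agreed share of tracked time.
import pandas as pd

usage = pd.DataFrame({
    "user": ["ann", "bob", "carol"],
    "unproductive_minutes": [35, 140, 60],
    "tracked_minutes": [480, 480, 480],
})

THRESHOLD = 0.20  # policy decision: what counts as excessive
usage["share"] = usage["unproductive_minutes"] / usage["tracked_minutes"]
flagged = usage[usage["share"] > THRESHOLD]
print(flagged[["user", "share"]])  # a starting point for questions, not a verdict
```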

Understand how your remote employees work

Image: A glowing model of the Earth cradled in a man’s hands, surrounded by icons representing data and the internet.

A remote workforce provides unique management challenges. Monitoring the computer activity of off-site employees helps reduce the visibility gap and provide managers with actionable insights.

  • What time periods are our remote workers the most active?
  • Are time zone differences reducing the potential for collaboration due to a lack of overlap?
  • How much demand is being put on our VPN at a given time? How many logins can we anticipate?
  • How do our employees prefer to work – in bursts throughout the week or do they tend to follow a standard schedule?
  • Are our employees more productive working from home or in the office?
  • How many meetings are held each week and how much time is spent in meetings?

Monitoring remote workers provides organizations with the business intelligence insights they need to make educated resource management decisions. They can see exactly how employees are working and when they typically work so they can adjust their management style accordingly.

Analyze and improve resource use

When you’re in charge of managing a remote team or a large department it can be difficult to get a feel for how well your team is using the resources available to them. Employee monitoring software can generate computer usage reports that help managers understand how their employees are using the internet and computer applications.

See how your team communicates and operates

Image: Three people having a video conference together. Two are together in the same room, the other is a smiling man seen on a laptop screen.

  • Are employees regularly using the team chat platforms we invested in?
  • Which do employees use most often – team chat or email?
  • Are employees using multiple chat platforms throughout the day? Why?
  • Are employees getting the best use of company-provided laptops or are they going unused?

A striking example of this is TechWiss Inc., which manages a distributed workforce across three continents. One of their coders was repeatedly missing development milestones, and managers were unsure why the developer was underperforming.

After reviewing the lone developer’s application usage data, they realized the developer wasn’t using the company’s team chat platform as expected. When they looked into it, they found the developer needed training and coaching to collaborate effectively with the software. The developer’s application usage history made this knowledge gap apparent and kept them from slipping through the cracks.

Shadow IT

Image: A man sits at his desk working on a computer; the shadowy figure of a colleague looms behind him.

“Shadow IT” – also known as Stealth IT, Client IT, or Fake IT – is any system, solution, or software that’s used by the employees of an organization without the knowledge and approval of the corporate IT department. Research from Everest Group estimates that shadow IT comprises 50% or more of IT spending in large enterprises.

It’s true that unknown and unmanaged applications are a potential security vulnerability, though that’s going to be of more interest to your IT department. From a manager’s perspective, shadow technologies are often productivity boosting solutions that could be officially adopted for everyone’s benefit.

By monitoring computer usage managers can receive an overview of the programs and web-based tools used by their department. This gives them the opportunity to learn about these solutions so they can advocate for their official adoption. Once cleared by the corporate IT department these innovative solutions can be shared within the workgroup.

Software asset management

Image: BrowseReporter daily application usage by hour report, with conferencing software examples.

Tracking application usage is a critical component of an effective software asset management strategy. Underutilized software costs businesses in the US and UK an estimated $34 billion per year. With the increasing popularity of software-as-a-service solutions, businesses need to understand the utilization rate of the software they pay for in order to save on operating expenses.

Application tracking gives you the data you need to determine whether existing solutions should be decommissioned or whether more licenses are required. For smaller teams, much of this data can be gathered in team meetings, but as the organization scales, the added visibility offered by computer monitoring software helps the company run leaner by identifying and decommissioning software that is no longer needed.

Bandwidth utilization

Image: BrowseReporter Bandwidth Usage by Sites report listing 13 different URLs.

Upgrading network infrastructure is time-consuming and expensive. Oftentimes, slow network speeds can be readily diagnosed by monitoring bandwidth usage for excessive consumption. For example, streaming a 4K Ultra HD Netflix video consumes 7GB of data per hour. Rather than investing in costly upgrades, the company can provide guidelines for the acceptable use of technology in the workplace.
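A quick back-of-the-envelope conversion shows why a handful of streams matters: at 7GB per hour, a single 4K stream works out to roughly 15-16 Mbps of sustained traffic. The 100 Mbps office link in the sketch is an assumed example.

```python
# Convert a stream's hourly data use into Mbps and estimate how many
# concurrent streams would saturate an office link.
GB_PER_HOUR = 7
mbps_per_stream = GB_PER_HOUR * 8 * 1000 / 3600  # gigabytes -> megabits -> per second
link_mbps = 100  # assumed office uplink

print(f"{mbps_per_stream:.1f} Mbps per 4K stream")
print(f"{link_mbps / mbps_per_stream:.1f} streams saturate a {link_mbps} Mbps link")
```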

Bandwidth usage data provides an objective overview of how much bandwidth is consumed, which departments require the most bandwidth, and whether that consumption reflects genuine business need or a misuse of company resources.

Conclusion

Employee monitoring software collects valuable computer usage data. By integrating this data into existing analytics processes you can gain advanced insights into the behaviors of your entire workforce. These insights help you make data-informed decisions that improve the productivity and future potential of your business.

Improve Your Business Intelligence with BrowseReporter


Top 10 DevOps Tools for Continuous Integration By Mitul Makadia

Continuous integration (CI) is the practice of merging all the software code changes and updates into a shared central repository/mainline. CI increases the efficiency of the team and encourages collaboration, by eliminating duplication and difficulties in incorporating code changes.

CI, a cornerstone technique of DevOps, eliminates the problems associated with long, tense, manual software integration. DevOps has been helping organisations of all kinds achieve success, and CI now enables the automation of the software build process, providing current builds at any time for testing, demonstration, or release purposes. Through this approach, CI allows teams to spend less time debugging and more time developing new features. Moreover, CI pushes developers to create modular, less complex code.

10 BEST DEVOPS TOOLS FOR CONTINUOUS INTEGRATION

While there are many DevOps tools for Continuous Integration, some are more widely used than others. Selecting the most appropriate CI tool can be challenging, especially for first-time users. Let’s have a look at the top 10 tools for Continuous Integration:


1. Apache Gump

Apache Gump is written in Python. It builds and compiles software code against the latest versions of projects, which allows Gump to detect incompatible modifications within a short span of time (a few hours) after such changes are uploaded to the version control systems.

2. Buildbot

Buildbot is an open source CI tool that automates software integration, build, and testing processes. It is written in Python on top of the Twisted framework. Buildbot can run builds on a variety of operating systems, including Windows, Linux, BSD, and OSX. It was conceived as a lightweight alternative to Mozilla’s Tinderbox project. It integrates with software configuration management (SCM) systems such as SVN, CVS, Mercurial, Git, Monotone, and BitKeeper.

3. Bamboo

Bamboo is a CI tool developed by Atlassian. It is available in two versions, cloud and server. For the cloud version, Atlassian offers a hosting service backed by Amazon EC2; the server version is self-hosted. Bamboo integrates with well-known Atlassian products such as JIRA and Bitbucket.

4. CircleCI

CircleCI is a cloud-hosted CI tool that integrates with GitHub. It supports several languages, including Java, Python, Ruby/Rails, Node.js, PHP, Scala, and Haskell, and offers container-based services. CircleCI offers one container free, and any number of projects can be built on it. It offers up to five levels of parallelization (1x, 4x, 8x, 12x, and 16x), so a maximum parallelization of 16x can be achieved in one build. CircleCI also supports the Docker platform.

5. Draco.NET

Draco.NET is a Windows service application created to enable Continuous Integration for DevOps. It monitors the source code repository, automatically rebuilds the project when changes occur, and then emails the build result along with a list of changes since the last build. Draco.NET can check source control repositories such as CVS, Visual SourceSafe, PVCS, and Subversion.


6. GitLab CI

GitLab CI is hosted on the free hosting service GitLab.com and offers Git repository management with features such as access control, bug tracking, and code review. GitLab CI is completely unified with GitLab, and projects can easily be linked using the GitLab API. GitLab CI’s build runners are written in Go and can execute on several operating systems, including Windows, Linux, Docker, OSX, and FreeBSD.

7. Go CD

GoCD is a CI tool developed by ThoughtWorks. It is available for Windows, OSX, and Linux. GoCD implements the concept of pipelines, which makes complex build workflows simple. Because it was designed around pipelines from the start, it removes build-process bottlenecks by enabling parallel execution of tasks.

8. Jenkins

Jenkins is a cross-platform open source CI tool written in Java. It can be configured through both a GUI and console commands. Jenkins is a very flexible tool because it extends its features through plugins; its plugin list is very broad, and one can easily add one’s own plugins to it. Furthermore, Jenkins can distribute software builds and test loads across several machines.

9. Travis CI

Travis CI is an open source CI service, free for all open source projects hosted on GitHub. Since Travis CI is hosted, it is platform independent. It is configured using .travis.yml files, which contain the build instructions. Travis CI supports a variety of languages and provides default build configurations for each of them. Travis CI uses virtual machines to build applications.

10. TeamCity

TeamCity is a sophisticated Java-based CI tool offered by JetBrains. It supports the Java, .NET, and Ruby platforms. TeamCity has a range of free plugins available, developed both by JetBrains and by third parties. It also integrates with several IDEs, including Eclipse, IntelliJ IDEA, and Visual Studio. Moreover, TeamCity allows multiple builds and tests to run simultaneously on different platforms and environments.

We are now all acquainted with the essential tools required to implement DevOps, one of which is Continuous Integration. DevOps tools for Continuous Integration have advanced considerably since they were first developed, and the trend toward cloud migration has led many companies to offer cloud-hosted solutions that are more user-friendly and economical than traditional self-hosted tools.

Originally published here.


Cyber Insurance: Changing Dynamics in a Maturing Market By JC Gaillard


Skills and data are building up, leading to less favourable conditions for negligent buyers

My company’s recent review of the Cyber Insurance market place, in collaboration with Cyber Solace, highlights a number of key elements.

The market has changed considerably since our first analysis in 2016, driven by non-stop cyber-attacks affecting all firms – large and small – and in particular by the spectacular rise in ransomware-related incidents, from Wannacry and NotPetya in 2017 to more recent Maze and Sodinokibi outbreaks.

The introduction of tighter privacy regulations such as GDPR in 2018 or CCPA has also contributed to the development of risk awareness amongst buyers, around sub-standard cyber security practices where personal data is concerned.

Generally, most actors across the cyber insurance sector have built up skills over the past few years, an area that was clearly deficient back in 2016. Data, which was equally lacking then, is also starting to accumulate in meaningful ways, as the Cyentia Institute and Advisen have comprehensively highlighted in their latest Information Risk Insight Study.

This is allowing new dynamics to emerge between buyers, brokers, agents and insurers.

A market less and less favourable to negligent buyers

Many buyers – in particular amongst small firms – are still looking at cyber insurance as some form of “silver bullet”: A way of transferring cyber risk in full without having to change existing practices.

The market is becoming less and less favourable to those negligent buyers.

In the past, insurers might have paid out on some claims for fear of killing the market. They are less and less driven to do so: as skills increase and data-driven models give deeper insights, buyers should expect to be challenged more and more on their cyber robustness.

Cyber insurance, as we foresaw to some extent as far back as 2015, could be in the process of becoming an incentive mechanism that drives adherence to security good practices as a condition for payouts by insurers, in the face of cyber-attacks which have now become plainly a matter of “when”, not “if”.

The threat of “silent cyber”

However, over the past few years, driven by the skills imbalance within the market which we highlighted back in 2016, a number of legacy practices have created a potential storm around the cyber insurance market at large, which the current COVID-19 crisis can only aggravate.

Cyber insurance was rarely sold as a standalone policy. Many cyber insurance policies have been effectively “buried” within other policies, and their diversity in terms of language, coverage or exclusions remains staggering.

This “silent cyber” problem is turning into a nightmare for many insurers and re-insurers, who are finding it increasingly difficult to estimate accurately the amount of cyber risk they actually carry, once again in the face of non-stop cyber-attacks, and now with the COVID-19 crisis aggravating the situation and punching a multi-billion hole in their pockets through business interruption payments.

The extent of this could be very significant and may end up creating a systemic risk event, over which the financial regulators would have to intervene.

Overall, even though it continues to be shaken by regulatory challenges and court cases (many high-profile lawsuits are still unresolved), the cyber insurance market is emerging out of immaturity. Insurers are effectively paying out, and cyber insurance is becoming a strong measure for CFOs and CEOs to consider in their arsenal of protections against cyber threats, as long as they remain otherwise committed to cyber security good practices.


Network Security Post-Pandemic: Key Measures to Secure Internet Network for Your Business By Joseph Chukwube

With the rise of next-gen technologies such as the Internet of Things (IoT) and Big Data, the network security landscape has become more complex and vulnerable in recent times. As a result, the number of internet security threats has risen steadily.

These include malware, attacks focused on IoT devices and closed networks, and phishing attacks, among many others.

Over time, attackers have evolved to become more technologically informed and have resorted to new and advanced methods of planned attacks on internet security components.

While network security is a challenge that we will continue to face, there are some very effective measures that can help individuals and organizations keep up with changing technologies and the subsequent network security threats that we might encounter.

Key Measures to Secure Internet Network

  1. Anomaly Detection

Once you identify how your network works and reacts to threats, it becomes easier for you to identify any potential risk in the form of anomalies. A network Anomaly Detection Engine (ADE) enables users to analyze and understand the behavior of their network. This helps in coming up with quick mitigation strategies to avoid any loss of confidential data. Hence, businesses should be looking to implement ADE in their security solution.
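As a toy illustration of the principle, the sketch below learns a baseline request rate and flags intervals that deviate sharply from it; a real ADE models far richer network features, so treat this only as the core idea.

```python
# Learn a normal request rate, then flag intervals that deviate sharply.
import statistics

baseline = [120, 115, 130, 125, 118, 122, 127]  # requests/min during normal operation
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(requests_per_min, z_threshold=3.0):
    z_score = abs(requests_per_min - mean) / stdev
    return z_score > z_threshold

print(is_anomalous(124))  # False: within normal variation
print(is_anomalous(900))  # True: a burst worth investigating
```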

  2. Access Control

It is a known fact that if your network is secured, there are critically fewer chances of your network getting breached by unauthorized personnel. However, at times, even authorized personnel can cause potential threats to the system.

To avoid such network security breaches, the Access Control methodology lets users secure networks by limiting the access rights of authorized users. Basically, it allocates resources and access rights to individual users strictly based on their responsibilities and thus limits the chances of any potential security threats to the network.
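A minimal sketch of that idea in code: map each role to only the resources its responsibilities require, and deny everything else by default. The roles and resources here are invented for illustration.

```python
# Role-based access control in miniature: deny by default, grant per role.
ROLE_PERMISSIONS = {
    "finance": {"ledger", "invoices"},
    "engineering": {"source_code", "build_servers"},
    "hr": {"personnel_records"},
}

def can_access(role, resource):
    return resource in ROLE_PERMISSIONS.get(role, set())

print(can_access("finance", "ledger"))       # True: within their responsibilities
print(can_access("finance", "source_code"))  # False: outside their duties
```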

  3. Next-Gen Virtual Private Network Security

A Virtual Private Network remains one of the most effective ways of ensuring network security. A VPN transfers data in the form of packets, each consisting of an encrypted payload coupled with a header that carries the routing information.

This coupling of encrypted data and routing information results in the creation of secure data packets that can be delivered over a public/shared network. These data packets cannot be read without the decryption keys, which makes the transmission of data secure.
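The core mechanics can be sketched with Python’s cryptography library: encrypt the payload, keep the routing header readable, and only the key holder can recover the data. Real VPN protocols such as IPsec or WireGuard are far more involved; this only illustrates the packet idea.

```python
# Encrypted payload plus plaintext routing header, in miniature.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # shared between the two tunnel endpoints
cipher = Fernet(key)

payload = cipher.encrypt(b"confidential corporate report")
packet = {"dest": "10.8.0.2", "payload": payload}  # header stays readable for routing

# On the wire the payload is opaque; only the endpoint holding the key can read it
print(cipher.decrypt(packet["payload"]))
```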

One of the primary benefits of using a VPN for network security is that users can connect and transmit data over networks both from their home networks and while they are on the go. This functionality of VPNs helps improve Wi-Fi security as well.

Another great benefit of securing your network with a VPN is that it reduces security risks from third parties. This is particularly beneficial for corporate employees who work from their respective homes, while in transit using mobile phone Wi-Fi and public networks, and most importantly, while sending and receiving confidential corporate information from third-party clients.

VPN platforms like ExpressVPN and NordVPN are known for securely masking the user’s IP address, shielding them from many kinds of network security threats on the internet. They also offer high speeds and do not track user activity or keep logs of traffic and visited sites. These features are what make VPN technology a recommended security solution today.

Beyond the features mentioned above, next-gen VPN platforms like Switcherry and Hotspot Shield are looking to add even more advanced technologies, including SDPs and SWGs.

Software-Defined Perimeter (SDP) creates an isolated network connection for different users within the same network. Secure Web Gateway (SWG), on the other hand, blocks user access to malicious traffic.

The addition of these technologies to the existing security capabilities of VPNs should lead to a strengthened security solution with a solid zero-trust framework.

This shows that Virtual Private Network technology is not going away anytime soon and that it is adapting to contain fast-advancing next-gen cyber threats.

  4. Array of Firewalls

Firewalls can be understood as virtual gates that act as a security perimeter between your network and the internet. There are different kinds of firewalls catering to different user needs, such as managing private networks, blocking unauthorized access, blocking access to malicious websites and links, and allowing authorized traffic to access the internet with all security measures taken care of.

Some of the popular kinds of firewall are:

  • Proxy Firewall
  • Next-Generation Firewall
  • Threat-Focused Firewall
  • Stateful Inspection Firewall
  • Unified Threat Management Firewall

  5. Setting Cyber Security KPIs

This is a rather complex yet compelling method of ensuring the safety of your network. All organizations and individuals should have a defined set of Key Performance Indicators (KPIs) for tracking and improving the security of their public and private networks. One must track important KPIs, such as Mean Time To Detect (MTTD) and Mean Time To Contain (MTTC), to study and reflect on the effectiveness of the security measures undertaken from time to time.
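Computing those two KPIs is straightforward once incident timestamps are recorded consistently. The sketch below is a minimal example with invented incident records and field names.

```python
# Compute MTTD and MTTC from incident timestamps.
from datetime import datetime
from statistics import mean

incidents = [
    {"began": datetime(2020, 9, 1, 8, 0), "detected": datetime(2020, 9, 1, 14, 0),
     "contained": datetime(2020, 9, 2, 9, 0)},
    {"began": datetime(2020, 9, 10, 22, 0), "detected": datetime(2020, 9, 11, 6, 0),
     "contained": datetime(2020, 9, 11, 18, 0)},
]

hours = lambda delta: delta.total_seconds() / 3600
mttd = mean(hours(i["detected"] - i["began"]) for i in incidents)
mttc = mean(hours(i["contained"] - i["detected"]) for i in incidents)
print(f"MTTD: {mttd:.1f} h, MTTC: {mttc:.1f} h")  # MTTD: 7.0 h, MTTC: 15.5 h
```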

Final Words

Given the extensive nature of threats to network security, individuals, as well as organizations, need to keep track of their cybersecurity indicators at all times. They must also invest in the latest upgrades and most effective security solutions to safeguard their networks.


Why CEOs Need to Learn to Code By Paul Lipman

Until last year, I hadn’t written a line of code since developing COBOL mainframe systems in the 1990s. As the CEO of a cybersecurity company that makes extensive use of machine learning (ML) in its products, I am interested in truly understanding how this transformative technology works. While I’ll never code for a living, the learning process has taught me more than just coding.

Open Source Opens Doors

I started by learning to code in Python, the most commonly used programming language in the ML field. From there, I made use of a number of open source resources including Jupyter Notebook, TensorFlow, and Keras. All of these powerful and comprehensive tools are freely available. Their easy access has opened the door to advanced machine learning technology for many – reducing the barrier to entry and as a result, accelerating the pace of innovation across many industries and communities.
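To give a feel for how approachable these tools are, here is roughly the first model those tutorials walk you through: a tiny Keras classifier for the MNIST digits dataset that ships with TensorFlow.

```python
# A beginner's Keras model: classify handwritten digits from MNIST.
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.0  # scale pixel values to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 image -> flat vector
    tf.keras.layers.Dense(128, activation="relu"),    # one hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1)
```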

Quantum computing is another field that is being enabled, accelerated, and transformed by open source; IBM’s 2017 release of the open-source framework Qiskit paved the way. Over the last few months, I used Qiskit to program IBM’s quantum computers. IBM currently makes Qiskit freely available for use via the cloud through the IBM Quantum Experience, which ultimately raises both interest in and the bar for its entire quantum ecosystem.
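A typical first Qiskit program looks like the sketch below: build a two-qubit Bell-state circuit and run it on the local simulator (this uses the Qiskit API as of 2020 and assumes qiskit with the Aer simulator is installed).

```python
# Build and simulate a Bell state: the 'hello world' of quantum programming.
from qiskit import QuantumCircuit, Aer, execute

qc = QuantumCircuit(2, 2)
qc.h(0)                    # put qubit 0 into superposition
qc.cx(0, 1)                # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])

backend = Aer.get_backend("qasm_simulator")
counts = execute(qc, backend, shots=1000).result().get_counts()
print(counts)  # roughly half '00' and half '11': the signature of entanglement
```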

Company Culture

When the CEO learns to code, it doesn’t just benefit the CEO. When working with our team, I’ve found that I am able to ask better questions – the right questions – of our engineering and product teams. The ability to understand technology at a detailed, hands-on level provides invaluable insight. Even though I’m no coding expert, learning to code has given me the context to better articulate the benefits and competitive differentiation of my company’s products. Looking toward the future, I feel I will be able to better assess upcoming technology and the implications for strategy and investments.

As an added bonus, I believe that there is power in leading by doing. If a CEO can do this in his or her spare time, anyone can.

Impact on Cybersecurity

The impact of machine learning on cybersecurity is readily apparent. Traditional cybersecurity approaches cannot handle the volume and complexity of new threats. With machine learning we can build dynamic models that continuously learn from the characteristics and behaviors of malicious activity across millions of devices worldwide. In this way, we can keep up with cybercriminals who are also building increasingly sophisticated attacks using machine learning.

While it’s always enjoyable to learn something new and to better understand your technical team members, there is one last, crucial thing learning to code has given me: a fresh outlook as a leader. I feel my mindset has shifted to be more experimental and explorative, my creativity has expanded, and best of all, I’m more open to new ideas. It’s a great place to be as a leader.


Protect Your Office 365 Data Against Consent Phishing By Matt McDermott

The rapid growth in remote work in the wake of the COVID-19 pandemic has led to a massive increase in the use of Microsoft Office 365 as both a platform for collaboration and as a repository for sensitive and confidential information.

As a result, we are seeing more apps being used with Microsoft’s identity platform to increase collaboration and productivity. This includes apps like SharePoint, OneDrive, Microsoft Teams, Power BI and more.

For the most part, individual productivity has increased in this new era of remote working – no arguments there. However, this increase in productivity may come at a price. Hackers are constantly looking to take advantage of unprecedented app usage by wet-behind-the-ears remote employees working on networks devoid of company firewalls and other safety measures.

While email phishing and credential compromise remain popular and efficient attacks, hackers have stepped up their game with application-based attacks like consent phishing. These attacks abuse the OAuth 2.0 protocol: sensitive data is accessed not by stealing your password but by tricking you into granting a malicious app the permissions it needs to access your Office 365 data.

Here’s how consent phishing works:

  • First, a malicious link is delivered through conventional email-based phishing or planted on an otherwise non-malicious website.
  • Once the user clicks the link, an authentic consent prompt appears asking for permission.
  • If the user clicks ‘Accept’, malicious apps will gain permission to access emails, forwarding rules, files, contacts, notes, profiles and other sensitive data.
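To see what that consent step looks like under the hood, here is a sketch of the kind of OAuth 2.0 authorization URL an app constructs against the Microsoft identity platform. The client_id and redirect URI are placeholders; the point is the scope parameter, which spells out exactly which permissions the consent prompt will ask for.

```python
# The permissions a consent prompt requests are named in the OAuth scope parameter.
from urllib.parse import urlencode

params = {
    "client_id": "00000000-0000-0000-0000-000000000000",  # placeholder app ID
    "response_type": "code",
    "redirect_uri": "https://app.example.com/callback",   # placeholder
    # A simple utility app has no business asking for all of these:
    "scope": "Mail.Read Mail.Send Files.ReadWrite.All offline_access",
}
print("https://login.microsoftonline.com/common/oauth2/v2.0/authorize?"
      + urlencode(params))
```

Reading the requested permissions in the prompt, rather than reflexively clicking ‘Accept’, is exactly the user-side defense this attack relies on you skipping.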


Image courtesy of Microsoft.

Protect Your Office 365 Data

When using apps with your Office 365, always opt for apps from an Azure Verified Publisher.

What does it mean when an application publisher is Azure Verified?

It’s a simple, blue verification badge that appears on the application consent prompt. The blue tick signifies that Microsoft has vetted the application publisher and verified that they are a Microsoft partner and legitimate business entity.

It empowers admins to protect users from consent phishing attacks by limiting their access to non-verified apps. Microsoft even provides steps your business can follow to ensure all apps in your Office 365 tenant come from Azure Verified Publishers.


Image courtesy of Microsoft.

Back Up Your Office 365 Data with an Azure Verified Publisher

Although many Office 365 backup solutions on the market claim to back up data safely, most don’t carry the blue tick. It’s this dangerous irony that puts your data and your business at risk. The truth is, keeping a copy of your entire Office 365 data in a cloud backup that is NOT Azure Verified is an open invitation for consent phishing and other such attacks.
