What You Need to Know about Ransomware Attacks and Office 365 By Kat Karpinski

Over the past two years, the rise in number and sophistication of ransomware attacks has been meteoric. A recent report released by the U.S. Department of Justice revealed that ransomware attacks quadrupled from 1,000 attacks per day in 2015 to more than 4,000 attacks daily in 2017.

While Microsoft products and services have been targeted by hackers for decades, now that Office 365 is the company’s fastest-growing solution, it has become a primary target. According to Jason Rogers, Microsoft’s lead Program Manager for Office 365 threat protection, in 2016 alone Microsoft saw malware attempts targeting Office 365 increase by 600 percent.

Ransomware attacks are not cheap. Cybersecurity Ventures predicts that ransomware damage costs will exceed $5 billion in 2017. Yes, that’s $5 billion. These costs include, but are not limited to:

  • Damage and/or data loss
  • Downtime and lost productivity
  • Ransom (cryptocurrency) if an organization decides to pay the hacker
  • Forensic investigation
  • Restoration and deletion of hostage data and systems
  • Damage to brand and reputation

With so much at risk at the hands of hackers, every member of your organization, from the COO to each individual employee, must take proactive measures to protect your data.

The Anatomy of a Ransomware Attack

At the highest level, there are three main components to most ransomware attacks:

  1. Find a way in
  2. Land and expand
  3. Encrypt and ransom

Find a way in: Often the easiest way to launch a ransomware attack is social engineering: tricking an end user into opening an email attachment or link that executes malicious code. Ransomware commonly masquerades as a link to a software update or hides inside document macros. The Cerber ransomware attack, for example, targeted Office 365 and flooded end users’ inboxes with an Office document that invoked the malware via macros. Ransomware also commonly exploits software vulnerabilities. The WannaCry attack was engineered to take advantage of a Microsoft vulnerability; although Microsoft released a patch in March 2017 to address it, and a second patch on May 13 to stop WannaCry, many customers had not applied them and fell victim to the largest ransomware attack to date. Scripting and APIs can also act as entry points to your system if you are in the cloud. Finally, compromising a user’s password or PII and then acting as a legitimate user is a common technique hackers use to find a way into your organization.
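Because macro-laden Office documents are such a common delivery vehicle, one simple defensive layer is to flag macro-enabled attachments before anyone opens them. The sketch below is purely illustrative, not a Spanning or Microsoft tool: the `attachments` staging folder is a hypothetical assumption, and it simply relies on the fact that macro-enabled Open XML files (.docm, .xlsm, .pptm) are ZIP containers carrying a vbaProject.bin part.

```python
import zipfile
from pathlib import Path

def has_vba_macros(path: Path) -> bool:
    """Return True if an Office Open XML file carries an embedded VBA project.

    Macro-enabled formats (.docm, .xlsm, .pptm) are ZIP containers that include
    a 'vbaProject.bin' part; its presence signals the document can run macros.
    """
    if not zipfile.is_zipfile(path):
        return False  # legacy binary formats (.doc/.xls) need deeper inspection
    with zipfile.ZipFile(path) as zf:
        return any(name.lower().endswith("vbaproject.bin") for name in zf.namelist())

if __name__ == "__main__":
    # Hypothetical staging folder where attachments were saved for review
    for attachment in Path("attachments").glob("*"):
        if has_vba_macros(attachment):
            print(f"WARNING: {attachment.name} contains VBA macros - review before opening")
```

A real mail-filtering layer would of course inspect far more than this, but even a crude check like the one above illustrates how the macro delivery vector can be surfaced before a user is tricked into enabling it.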

Land and expand: Once your organization’s system has been breached, ransomware is built to expand quickly, locking down as much of your system as possible. Ransomware can be programmed to search for critical files locally, on the network, and in the cloud. It can contact command-and-control services, and it can use its access to spread to other machines. With Office 365 and other cloud apps, ransomware can easily propagate through sharing: collaboration tools such as SharePoint Online and OneDrive for Business can inadvertently spread ransomware across multiple users, systems, and shared documents. The impact can range from full access to your organization’s data and email to data leaks or outright data destruction.

Encrypt and ransom: Finally, ransomware, unlike other types of malware, will encrypt your files or lock down your system. Infected end users will see a message on their devices stating that their data is being held for ransom. Hackers typically demand payment in cryptocurrency to unlock or release victims’ systems and data. However, there is no guarantee that the hacker has not damaged your data or will return control to your organization. As often as not, your data remains destroyed or inaccessible even after the ransom has been paid.

Data Protection and Office 365

With 4,000 attacks per day looming in the back of your mind, how do you successfully prevent ransomware from breaching your organization? There is no silver bullet or single solution that will protect you. For Office 365 and other cloud apps, Spanning recommends a layered approach. The NIST Cybersecurity Framework is a great place to start if you don’t already have a plan in place. The three pillars highlighted below are the most crucial and require evaluation when moving your critical business data to a SaaS application. End user training is also critical, as end users are often the “malware gateway” into your organization.

Backup and Recovery Chart

Backup and restore solutions

It’s vital to have healthy processes in place to protect critical business data before an attack happens. Implementing a trusted backup and recovery solution is a proactive means of protecting your data and your organization’s productivity from cyber-attacks such as the Cerber ransomware attack. If you do suffer an attack, your organization must be able to get back up and running quickly. Backup solutions such as Spanning Backup for Office 365 can restore your critical business data to the last ‘clean’ version before the attack occurred. This restore capability also minimizes the hefty cost of employee downtime and eliminates the need to pay the ransom.

Backup and restore solutions built for cloud apps ensure you can recover your data and bypass dealing with the ransomware attack altogether.
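To make the idea concrete, here is a minimal sketch of the kind of point-in-time copy a cloud-to-cloud backup tool performs, using the Microsoft Graph API to pull down a user’s OneDrive for Business files. This is not Spanning’s implementation: the access token, environment variable, and destination folder are assumptions, and a production tool would also handle paging, versioning, throttling, folders, and SharePoint content.

```python
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
# Assumes an OAuth access token with Files.Read.All has already been obtained
# and placed in this (hypothetical) environment variable.
TOKEN = os.environ["GRAPH_ACCESS_TOKEN"]
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def backup_drive_root(dest_dir: str = "backup") -> None:
    """Download every file in the root of the signed-in user's OneDrive."""
    os.makedirs(dest_dir, exist_ok=True)
    items = requests.get(f"{GRAPH}/me/drive/root/children", headers=HEADERS).json()
    for item in items.get("value", []):
        if "file" not in item:  # skip folders in this simplified sketch
            continue
        # The /content endpoint redirects to a pre-authenticated download URL
        resp = requests.get(f"{GRAPH}/me/drive/items/{item['id']}/content",
                            headers=HEADERS, allow_redirects=True)
        with open(os.path.join(dest_dir, item["name"]), "wb") as fh:
            fh.write(resp.content)
        print(f"backed up {item['name']}")

if __name__ == "__main__":
    backup_drive_root()
```

The point of such point-in-time copies is that, after an infection, you restore from the last clean snapshot instead of negotiating with the attacker.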

Learn More

View “How to Recover from a Ransomware Disaster,” presented by Mat Hamlin, VP of Products, and Brian Rutledge, Senior Security Engineer: http://ift.tt/2ASn63J


Artificial Intelligence: Man With Machine By Aaron Kim

In my last Biznology post, Man versus machine, I discussed how advances in AI are feeding a growing fear that machines may soon be doing our jobs better than we do, making us all obsolete. This discussion is as old as technology itself, and one can argue that the fundamental objective of technology is exactly that: do something better than what is possible today, be it human-driven or machine-driven. The novel part is how fast it’s happening, potentially forcing us out of our jobs several times before we retire. Some people even predict a future where work as we know it today will no longer be required. Others try to reconcile the two narratives and bet on a hybrid future, where humans benefiting from AI advances can do work we don’t even imagine today.

One aspect often neglected in these discussions is that AI and machine learning are not the only areas seeing remarkable advances these days. In Walter Isaacson’s biography of Steve Jobs, the Apple CEO is quoted as saying about his disease: “I’m either going to be one of the first to be able to outrun a cancer like this, or I’m going to be one of the last to die from it.” Writing about the legacy Jobs left in personalized medicine, Antonio Regalado of the MIT Technology Review had an interesting take on the link between genetics and the digital revolution:

DNA is a profoundly digital molecule. And now that it’s become very cheap to decode, genetic data is piling up by the terabyte.

For all the complexity of our body’s biochemistry, the simplicity and elegance of our genetic code is remarkably similar to binary computer representation. You can pretty much describe a living being’s full genetic makeup through sequences made of the four nucleotide letters: A, T, C, and G. We already have technology that can quickly read key parts of our genetic sequence, and there are emerging techniques like CRISPR that hold the potential to let scientists edit parts of that code.

Neuroprosthetics is another area where new developments are redefining what a human with a machine can do. From augmenting our visual and auditory capabilities to enhancing human intelligence, we may see learning being reinvented over the next few decades.

Finally, there is significant buzz around nootropics, also known as cognitive enhancers: drugs that are said to augment or accelerate our ability to learn new things.

Naturally, I’m not saying all these advances are necessarily a good thing. There will be plenty of ethical concerns to address as new technologies enable us to become real-world Tony Starks in a not-so-distant future. Paying too much attention to the digital revolution in its strict computing sense can prevent us from noticing that there is a bigger world of science out there, one which may ultimately redefine what it means to be artificially intelligent.


Managing Risk or managing risks? By JC Gaillard

The keys to a successful second line of defence

There are many risk management methodologies in existence, but it is not uncommon to come across large firms still following simplistic, dysfunctional or flawed practices today, in particular around operational risk management.

The main issue with many of those approaches is that they are plagued by a fundamental theoretical issue, which goes far beyond semantics: There is an abyss between managing “Risk” (broadly defined as “the impact of uncertainty on objectives”) and managing “risks” (events or scenarios that might have an undesirable outcome).

But many practitioners, when faced with the challenge of establishing a second-line-of-defence type of function, still follow the path of least resistance and start by defining upfront an arbitrary series of “risks”, generally collected through workshops with senior executives in the business. In practice, that’s where many things start to go wrong, driven by a short-termist business agenda or a complacent “tick-in-the-box” management culture around compliance.

The dynamics of those workshops often revolve around “what keeps you awake at night” discussions, which force the participants to imagine situations where something could go seriously wrong and hit the firm. Participants generally engage with the process based on their own experience and their ability to project themselves. Almost always, they draw on past experiences, things they have seen at other companies (in other jobs) or things they have heard of. Rarely are those stories based on hard facts directly pertinent to the firm and its problems. This often results in organic and very rich exchanges, but it also leads to an avalanche of scenarios, unstructured and often overlapping. The lack of rigour in the approach also results, in most cases, in a considerable language mix-up, with the description of the so-called “risks” shamelessly combining threats, controls and other elements, internal or external.

Then follows a second phase during which participants are asked to estimate how likely those scenarios are to affect the firm and what the resulting financial loss could be.

The first part (“how likely those scenarios are to affect the firm”) is plagued by a fundamental confusion between frequency and probability (in many cases entirely by design, i.e. participants being asked “could this happen weekly, monthly, annually?”). Again, participants tend to engage with the question by drawing on past experiences (the “bias of imaginability” theorised by Kahneman) or things they have seen elsewhere, irrespective of the actual context of the firm itself. At best, it results in “educated guesses”; at worst, we end up in pure “finger-in-the-air” territory.

The assessment of the potential financial losses is often more reliable, as this is an area where most of the senior executives involved would have more experience, and as long as the monetary brackets are wide enough, they are likely to put the various scenarios in the right buckets.

On the back of that, a risk “heat map” is drawn, a number of action plans are defined and a budgetary figure is put on each (in terms of the investment required to have an impact on the risk map). This is the point where risk is either “accepted”, “mitigated” or, in theory, “transferred”.

In practice, the impact of the proposed scenarios on the risk map is often estimated and rarely quantifiable, and the whole process is simply used to drive or justify a positive or negative investment decision, or to present an illusion of science to auditors or regulators.

The agreed actions are then given to a project manager or to a programme office to supervise, often with some form of progress reporting put in place back to a risk committee, with all sorts of convoluted KPIs and KRIs wrapped around it.

This whole approach is certainly better than doing nothing, but it is flawed at a number of levels. Essentially, it is vulnerable to political window-dressing from start to end, and the various estimations made by senior executives along the chain (willingly or unwillingly) can be used to adjust to any internal political agenda (e.g. presenting a particular picture to regulators, limiting expenditure, not having to confront boards or business units with an inconvenient truth).

Fundamentally, the “risks” being (allegedly) “managed” may have nothing to do with the actual reality of the firm, and even the “management” aspects may be disputable, in particular if the governance around the actual delivery of the agreed action plan is weak or inefficient (or, at the other end of the scale, bureaucratic and overly complex). This is more about “doing stuff” (at best) than “managing Risk” because of the colossal amount of assumptions made along the way.

There are 3 aspects that need to be addressed for those methods to work better and deliver proper results in terms of real “Risk Management”:

1- Talking to senior executives and running workshops with them is a good start, but they should be focused on “threats” – and not “risks” – and on the “assets” the “threats” may target. Focusing on threats and assets brings advantages at 2 levels: First it roots the language of the discussion in the reality of what is at stake, instead of hypothetical scenarios. Second, by following simple threat modelling practices, it offers a structure to guide the discussion with some rigour:

  • Who are the people or organisations who could cause you harm? (the threat agents)
  • What are their motivations? Their level of sophistication? The attack vectors they use? The attack surfaces they look for?
  • What could they do to you?

By combining and ranking those factors, you arrive at a number of key scenarios that are rooted in the reality of the firm and its context, and in the process, you have forced the executives involved to face the reality of the firm, the world it operates in, and its real viciousness.
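As a purely illustrative aid, not part of the author’s method, the combine-and-rank step can be captured in a few lines: score each threat-agent/asset pairing on factors such as capability, motivation and exposure of the attack surface, then sort to surface the key scenarios. The factors, weights and example scenarios below are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ThreatScenario:
    agent: str          # who could cause harm (the threat agent)
    asset: str          # what they would target
    capability: int     # 1-5: sophistication of the threat agent
    motivation: int     # 1-5: how motivated they are
    exposure: int       # 1-5: how exposed the relevant attack surface is

    def score(self) -> int:
        # Simple multiplicative ranking; a real exercise would calibrate
        # these factors with stakeholders during the workshops.
        return self.capability * self.motivation * self.exposure

scenarios = [
    ThreatScenario("Organised crime", "Customer payment data", 4, 5, 3),
    ThreatScenario("Disgruntled insider", "HR records", 3, 3, 4),
    ThreatScenario("Opportunistic hacker", "Public web site", 2, 4, 5),
]

# Rank scenarios so the workshop output is structured rather than an
# unordered avalanche of overlapping "risks".
for s in sorted(scenarios, key=lambda s: s.score(), reverse=True):
    print(f"{s.score():3d}  {s.agent} -> {s.asset}")
```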

But for the result to be truly representative and meaningful, it is also essential to ensure that all stakeholders are involved across all geographies and corporate silos (business units, IT, Legal, HR, procurement, etc…), and to include key external business partners where business processes or IT facilities have been outsourced.

2- Asking executive management to place the resulting scenarios in broad financial loss buckets is a good step that is likely to work well, as we indicated before, and could be kept, but the assessment of any form of probability of occurrence or potential impact should be dissociated from the discussion with executives at this stage and, again, firmly rooted in the reality of the firm through an independent assessment of the actual presence or absence of the necessary protective measures.

This is essential in focusing management on the fact that “managing Risk” is about protecting the firm from undesirable outcomes, and that it is achieved through the actual implementation of tangible measures that are known to protect, and can be:

  • determined upfront based on the identified threat scenarios,
  • mandated by policy or adherence to good practice,
  • enforced through good governance, internally and with third-parties.

Risk is a by-product of the presence or absence of such measures, and the actual Risk “heat map” for the firm can be drawn in a quantified manner from those independent assessments, instead of being estimated.
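To illustrate the quantified approach (the control names, loss figures and weighting below are hypothetical, not the author’s), a residual risk figure per scenario can be derived directly from an independent assessment of which protective measures are actually in place, rather than from estimated likelihoods:

```python
# Hypothetical illustration: risk derived from control coverage, not guesswork.
# For each scenario, list the controls known to protect against it, then let
# residual risk be the assessed loss weighted by the share of those controls
# that the independent assessment found missing or ineffective.

controls_in_place = {        # output of an independent control assessment
    "email filtering": True,
    "offline backups": False,
    "patch management": True,
    "privileged access management": False,
}

scenarios = {
    # scenario: (assessed loss bucket in $, controls required to protect)
    "ransomware outbreak": (5_000_000, ["email filtering", "offline backups", "patch management"]),
    "insider data theft":  (2_000_000, ["privileged access management"]),
}

for name, (loss, required) in scenarios.items():
    missing = [c for c in required if not controls_in_place.get(c, False)]
    residual_risk = loss * len(missing) / len(required)
    print(f"{name}: residual risk ~ ${residual_risk:,.0f} (missing: {missing or 'none'})")
```

Risk treatment scenarios then become simple to model: flip a proposed control to “in place”, re-run the same calculation, and the quantified movement on the heat map falls out of the numbers rather than out of someone’s estimate.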

3- Once the Risk “heat map” is firmly linked to the presence or absence of actual protective measures, it is possible to define risk treatment scenarios also linked to those measures and map in a quantified manner the impact they would have on the Risk “heat map”.


It is then possible to compare those Risk treatment scenarios and determine the most attractive for the firm. It also becomes possible to track and visualise progress in a quantifiable manner.

It is easy to argue that the governance issues around the actual delivery of agreed Risk treatment actions still remain (in particular for larger firms), and that the two approaches are fundamentally the same (one qualitative, and the other quantitative), but the quantitative approach is truer to its purpose (“managing Risk”), considerably richer in terms of managerial levers, and far less vulnerable to manipulation and window-dressing.


Online Education Business as a Tool to Boost Economy By Melissa Burns

E-learning is on the rise. In the USA, about 77% of companies offer online courses to help train their employees, and 81% of learners partake in online study for personal development. Today it is possible to get quality business knowledge from leading professionals without going to college or even leaving your room. Online courses probably will not guarantee the same effect as attending a top business school and will not necessarily land you a highly paid job right away. However, e-learning can give you important skills that students usually gain from top business schools.

Whether it is a company that wants to incorporate an online education program or a student who wants to get an MBA, the advantages of e-learning compete with those of offline education.

Having become the biggest revolution in modern education, e-learning has made a big change in the economy and opened a lot of opportunities for people from around the world.

So why is online education important for boosting the economy?

Advantages of E-Learning for Economy

Online Courses are Personalized and Flexible

Not every person learns the same way. Although the material for each course is standard for everyone, each individual can control the pace of education. Thanks to the flexibility e-learning provides, each individual can take part in the educational process from any corner of the world with a computer and an internet connection. This makes the entire process easier and removes concerns about where the course will take place.

According to research by the U.S. Department of Education, “on average, students in online learning conditions performed modestly better than those receiving face-to-face instruction.” The report noted benefits in studies in which online learners spent more time on task than students in the face-to-face condition. However, it is important to realize that an online lesson will not be easier than one taken offline. Motivation is still the key to any successful education process.

Online Courses are Cost Effective

With e-learning, students can save hundreds of dollars by economizing on expenses that are associated with attending classes. No more expensive books, costs for transportation, babysitting and other expenses that go with traditional offline courses.

Online Courses are Beneficial to Economy

While online education is undoubtedly beneficial to learners, the question remains whether the increase in online courses and the rise of student enrollment in e-learning are helpful for the economy.

It goes without saying that the development of online courses transforms the traditional approach to education and adds a new business sector, which translates into more jobs for online instructors, web developers, and administrators. So how can it impact the rest of the economy? With the rising popularity of online courses, employers are becoming more eager to hire their graduates. E-learning is gradually replacing traditional education and producing a larger pool of online-educated students, which can potentially result in an economic boom.

Opportunities for flexibility, cost-effectiveness, and economic development are among the variables that have shaped the online learning process. Online courses have great potential to improve and change the education process as well as benefit the economy. Being less expensive and more flexible, online education is becoming more and more attractive to learners.

For e-learning to succeed, it is also important to find and prepare instructors who will be ready to engage in this new educational process. Luckily, modern technology enables instructors to develop new ways to teach students online in ways that are much more effective than face-to-face classes. The impact of online education is probably not so noticeable right now; however, if the pace of e-learning growth remains the same, it can result in an economic lift.


The Rise of AI Capable Smartphones By Mitesh Patel

OVER HALF A BILLION SMARTPHONES WILL BE SHIPPED WITH ‘OUT-OF-THE-BOX ON-BOARD AI CAPABILITIES’

According to the latest research from Counterpoint’s Components Tracker Service, one in three smartphones to be shipped in 2020 will natively embed machine learning and artificial intelligence (AI) capabilities at the chipset level. Apple, with its Bionic System on Chip (SoC), proliferating across its complete portfolio over the next couple of years, will drive native AI adoption in smartphones, says the report. “Its universal adoption of AI-capable SoCs will likely enable Apple to lead the AI-capable chip market through 2020.” Huawei, with its HiSilicon Kirin 970 SoC, launched in September and finding application in the Huawei Mate 10 series launched recently in Munich, is second to market after Apple with AI-capable smartphones. The Huawei Mate 10 is able to accomplish diverse computational tasks efficiently, thanks to the neural processing unit at the heart of the Kirin 970 SoC.

However, Qualcomm will unlock AI capabilities in its high- to mid-tier SoCs within the next few months, the study points out. “It should be able to catch up and is expected to be second in the market in terms of volume by 2020, followed by Samsung and Huawei.”

“Apple, with its Bionic System on Chip (SoC), proliferating across its complete portfolio over the next couple of years, will drive native AI adoption in smartphones”

Machine learning and AI had not made major headway in mobile applications until the second half of 2017, due to the limited processing power of smartphone CPUs, which would have hindered the user experience. AI applications require huge amounts of data processing even for a small task. Sending that information to cloud-based data centers and receiving the results is potentially difficult, time-consuming and requires a solid connection, which is not always available. The answer is to have the AI capability on board the device.
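As a rough illustration of what “AI capability on board the device” means in practice (this is a generic sketch, not tied to any of the chipsets discussed here), frameworks such as TensorFlow Lite let a pre-trained, quantised model run locally so no round trip to a data centre is needed. The model file and input below are placeholders.

```python
import numpy as np
import tensorflow as tf

# Load a quantised model bundled with the app; nothing leaves the device.
interpreter = tf.lite.Interpreter(model_path="mobilenet_quant.tflite")  # placeholder model
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Placeholder camera frame shaped to the model's expected input.
frame = np.random.randint(0, 256, size=input_details[0]["shape"], dtype=np.uint8)

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()                       # inference runs entirely on-device
scores = interpreter.get_tensor(output_details[0]["index"])
print("top class:", int(np.argmax(scores)))
```

Dedicated neural processing units like those described above accelerate exactly this kind of local inference, which is why latency-sensitive features such as face recognition and scene detection can work without a network connection.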

Commenting on the analysis, Counterpoint Research Director Jeff Fieldhack noted, “The initial driver for the rapid adoption of AI in smartphones is the use of facial recognition technology by Apple in its recently launched iPhone X. Face recognition is computationally intensive and if other vendors are to follow Apple’s lead, they will need to have similar onboard AI capabilities to enable a smooth user experience.”

Adding to this, Research Director Peter Richardson said, “With advanced SoC-level AI capabilities, smartphones will be able to perform a variety of tasks such as processing natural languages, including real-time translation; helping users take better photos by intelligently identifying objects and adjusting camera settings accordingly. But this is just the start. Machine learning will make smartphones understand user behaviour in an unprecedented manner. Analysing user behaviour patterns, devices will be able to make decisions and perform tasks that will reduce physical interaction time between the user and the device. Virtual assistants will become smarter by analysing and learning user behaviour, thereby uniquely serving each user according to their needs. This could potentially help virtual assistants take the leap and become a mainstream medium of interaction between the user and device.”

Native AI can also effectively counter the increasing security threats smartphones face, through things like real-time malware detection, recognizing unusual user behaviour to identify whether the phone is being misused, and analysing email and other apps for phishing attacks.

There is also growing potential for AI-capable devices to play a key role in healthcare. Machine learning algorithms can be used to generate health and lifestyle guidance for users by analysing combinations of sensor data and user behaviour.

“Overall, we expect AI-capable smartphones to proliferate rapidly at the top end of the market, but to filter relatively quickly into mid-range devices from the mid to latter part of 2018. By 2020, we expect over one-third of all smartphones shipped to be natively AI-capable.”


Technology Hijacking Your Brain and Killing Your IQ By Kris Green


How often does technology interrupt us from what we really mean to be doing? At work and at home, we spend a startling amount of time distracted by notifications and pop-ups. Instead of helping us spend our time well, it often feels like tech is stealing hours away from us.

Why does this happen? Because in a world overloaded with data, our attention is the main good that advertising companies want to sell and brands want to buy. Social networking apps are the prime location to steal this attention away. So what are advertising companies doing to attract this attention, and what are we getting as a result?

Social Apps—>Addiction and Stress

In 2009, Facebook released the “Like” button—to “send little bits of positivity” across the platform.

Like many user interface changes, the introduction of the “Like” button was meant to solve a problem. Facebook has collected, and continues to harvest, valuable sentiment data for advertisers from it. Facebook is able to learn what grabs users’ attention, while the network’s users enjoy the short-term shot of dopamine they get from giving or receiving social affirmation.

In 2017, less than a decade after the introduction of the button, “Like” inventor Justin Rosenstein shared his concern about social media addiction, comparing it with heroin and describing “Likes” as “bright dings of pseudo-pleasure”. He has personally blocked himself from Reddit and Snapchat, while imposing strict time limits on his use of Facebook.

The addictive feedback loop that social websites thrive on has become a constant dispensation of positive affirmation that decays into self-doubt.

Add to that fear of missing out, bullying, and the harmful effect on sleep, and you have a wicked cocktail brewing. These are side effects of using Snapchat, Facebook and Twitter, according to a survey by the Royal Society for Public Health and the charity Young Health Movement.

The mining of human personality from social media behavior doesn’t stop there: advertisers are getting in on the psychology of social media. Creepily, data-harvesting specialists from Britain and the US can determine, with just one Facebook “Like”, a potential consumer’s personality type (introvert or extravert) and target ads accordingly.

Aggressive Advertising and Pop-ups—>Distraction

It seems, too, that advertisers understand how vulnerable humans are to interruption: pop-ups are the advertiser’s favorite tool. Pop-ups can increase website conversions by 2,100 percent.

The Swedish researcher Nils Holmberg measured how well children in two age groups could fix and control their gaze. Altogether, 45 children were instructed to ignore a spot that popped up and look as quickly as possible to the other side of the screen. Nine-year-olds managed this just two times out of ten. Twelve-year-olds were better at concentrating. Previous research has shown that adults manage to control their attention up to eight times out of ten.

Interruptions come in many forms, however, as any advertiser worth their salt will confess. Using basic lessons from advertising psychology, marketers leverage images, text, color and context to make sure their ads cut through your concentration.

According to StopAd research, 12.7 percent of banner ads on more than 1,000 popular websites use aggressive colors and 41.6 percent use aggressive language to distract users and grab their attention.

One of the biggest complaints about pop-ups and aggressive advertising is that they lower your ability to concentrate and interrupt you as you’re trying to accomplish something. Perhaps because of their effectiveness, pop-ups and aggressive ads are consistently the most despised kind of advertisement. That loathing often drives people to use ad blockers to get rid of them.

Clickbait Content—>Biased Opinion and Fake News

To capture our attention, advertisers and media outlets started to use special headlines designed to make readers want to click on a link or a banner ad. They employ a number of effective cognitive tricks to do so.

And it works.

In a recent paper called “Breaking the News: First Impressions Matter On Online News,” two researchers looked at 69,907 headlines produced by four international media outlets in 2014. After analyzing the sentiment polarity of these headlines (whether the primary emotion conveyed was positive, negative, or neutral), they found that “a headline has more chance to [receive clicks] if the sentiment expressed in its text is extreme, towards the positive or the negative side.”
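A toy version of that kind of analysis is easy to sketch. This is not the paper’s actual method: the word lists below are a hypothetical stand-in for a proper sentiment model, but they show how polarity scoring separates extreme headlines from neutral ones.

```python
# Hypothetical, lexicon-based stand-in for real sentiment analysis.
POSITIVE = {"amazing", "best", "breakthrough", "stunning", "win"}
NEGATIVE = {"disaster", "worst", "shocking", "crisis", "fail"}

def polarity(headline: str) -> int:
    """Crude polarity score: +1 per positive word, -1 per negative word."""
    words = headline.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

headlines = [
    "Stunning breakthrough promises the best year yet",
    "Quarterly results released on schedule",
    "Shocking crisis leaves industry facing its worst fail",
]

# Per the paper's finding, the first and last (most extreme) headlines
# would be expected to attract the most clicks.
for h in headlines:
    print(f"{polarity(h):+d}  {h}")
```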

Striving to make headlines more emotional, content creators began to use misleading language and distorted facts.

All too often, especially when it comes to science news, we see headlines that are directly contradicted later in the article. For example, have you ever seen an article with a headline like “Air Pollution Is A Leading Cause of Lung Cancer” that is undercut by a single quote in the middle of the article pointing out that “other things have a much bigger effect on our risk [of lung cancer], particularly smoking”?

During the last year, the fake news trend clearly demonstrated how a clickbait headline can impact the lessons you take away from what you read.

Mobile Devices—>Lower IQ

According to a study published last year, we touch our phones about 2,617 times a day.

“For the heaviest users—the top 10 percent—average interactions doubled to 5,427 touches a day. Per year, that’s nearly 1 million touches on average—and 2 million for the less restrained among us,” the study said.

There is growing concern that as well as addicting users, technology is contributing to a condition called “continuous partial attention,” which severely limits people’s ability to focus and possibly lowers IQ.

People are now trying to refrain from using technology such as smartphones or computers in order to focus more on physical, face-to-face, social interactions. According to a survey on Digital Detox, 43 percent of survey respondents went on vacation in the last year with the intent to unplug. The most common motivations were being in the moment (69 percent) and stress relief (65 percent). The report, however, said more than half of respondents (52 percent) indicated that they spent at least an hour a day while on vacation using their connected devices.

Intuitively, we may understand that our overwhelming exposure to social media, hyper-specific and -optimized advertisements, and mobile technology is affecting the ways we interact with one another. What may come less intuitively, however, is the growing body of evidence that all this technology may be weakening our willpower and making us lazy thinkers.
