Wednesday 31 October 2012

Security and Automation in the Cloud

One of the biggest concerns IT managers have about moving business-critical applications into the cloud and away from the data center is the issue of security. What you may not realize, however, is that the same tools for automation and provisioning that function in cloud implementations also offer a significant opportunity to improve security.

Public vs. private clouds

The first concern, of course, is whether the cloud in question is private or public. If you’re using a public cloud provider such as Amazon Web Services, you know you’re looking at Level 1 PCI DSS compliance. If you’re running a cloud solution in-house, however, you need to make sure you’re handling all of that security yourself: firewalls, network management, and storage management.

Public cloud solutions also require different security and automation models. You need to give more heed to firewalls, NAT, load balancers, and other related issues. This doesn’t mean public cloud solutions aren’t worth it, of course. They still provide increased efficiency, scalability, and even security.

Benefits of automated provisioning

Cloud solutions that automate server configuration during the provisioning process improve security in cloud environments. You might have literally thousands of VMs, each of which would require individual setup and maintenance without automated provisioning. Automated provisioning reduces your costs, increases your agility, and creates a standardized environment that is less vulnerable to security issues than non-automated environments.

Virtualized, embedded security

The nature of a virtual machine is such that every security measure you place on the virtual server is naturally replicated. That means that, as you expand your use of a private cloud solution, you have the ability to automatically embed security measures in each new VM as it is created.

Management is the key here. If this aspect isn’t handled correctly, you can create a wide array of variant server images, each with different security measures in place. This creates something of a security nightmare.

Using the automation in your cloud solution to make certain your servers comply with all necessary security measures should be one of the key tasks your cloud computing staff deals with on a regular basis.
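To make the idea concrete, here is a minimal sketch of what such an automated compliance check might look like. The baseline settings and the VM inventory are hypothetical; in a real deployment the inventory would come from your provisioning or configuration-management system rather than being hard-coded.

```python
# Hypothetical sketch: verify every provisioned VM against a security baseline.
SECURITY_BASELINE = {
    "firewall_enabled": True,
    "ssh_password_auth": False,   # keys only
    "disk_encryption": True,
    "patch_level": "2012-10",
}

def check_compliance(vm):
    """Return the baseline settings this VM violates."""
    return [key for key, required in SECURITY_BASELINE.items()
            if vm.get(key) != required]

# Stand-in inventory; normally pulled from the provisioning system.
vm_inventory = [
    {"name": "web-01", "firewall_enabled": True, "ssh_password_auth": False,
     "disk_encryption": True, "patch_level": "2012-10"},
    {"name": "db-02", "firewall_enabled": True, "ssh_password_auth": True,
     "disk_encryption": True, "patch_level": "2012-09"},
]

for vm in vm_inventory:
    violations = check_compliance(vm)
    status = "compliant" if not violations else "NON-COMPLIANT: " + ", ".join(violations)
    print(vm["name"], "-", status)
```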

Security Shortage? Look Internal.

There has been an increasing amount of commentary about the growing shortage of Information Security folks. While the reasons for this shortage are manifold and easily explained, that doesn’t change the fact that it exists. Nor the fact that natural forces may well be causing it to worsen.

Here’s why we’re where we are:

Information Security is a thankless job. Literally thankless. If you do enough to protect the organization, everyone hates you. If you don’t do enough to protect the organization, everyone hates you.

Information Security is hard. Attacks are constantly evolving, and often sprung out of the blue. While protecting against three threats that the InfoSec professionals have ferreted out, a fourth blindsides them.

Information Security is complex. Different point, but similar to the one above. You can’t just get by in InfoSec. You have to know some seriously deep things, and be constantly learning.

Information Security is demanding. When the attackers come on a global clock, defenders have to be ready to respond on one. That means there are limits to “time off”, counting both a good night’s sleep and vacations as casualties.

The shrinking pool has made the last point worse. With fewer people to share the load, there is more load for each person to carry – more on-call, more midnight response, more everything.

Making do with the best security staff you can find may well be killing the rest of your InfoSec team. If “the best you can find” isn’t good enough, others must pick up the slack.

And those last two points are the introduction to today’s thought. Stop looking for the best InfoSec people you can find. Start training good internal employees in InfoSec. You all know this is the correct approach. No matter how good you are at Information Security, familiarity with the network, systems, and applications of your specific organization is at least as important. Those who manage the organization’s IT assets know where the weaknesses are and can quickly identify new threats that pose a real risk to your data. The InfoSec needs of a bank, for example, are far better served by someone familiar with both banking and this bank than by someone who knows Information Security but learned all that they know at a dog pound. The InfoSec needs of the two entities are entirely different.

And there’s sense to this idea. You have a long history of finding good systems admins or network admins and training them in your organization’s needs, but few organizations have a long history of hiring security folks and doing the same. With a solid training history and a deeper available talent pool, it just makes sense to find interested individuals within the organization and get them security training, backfilling their positions with the readily available talent out there.

Will it take time to properly vet and train those interested? Of course it will. Will it take longer than it would take to train an InfoSec specialist in the intricacies of your environment? Probably not. SharePoint is SharePoint, and how to lock it down is well documented, but that app you had custom developed by a coding house that is now gone? That’s got a way different set of parameters.

Of course this option isn’t for everyone, but combined with automating what is safe to automate (which is certainly not everything, or even the proverbial lion’s share), you’ll have a stronger security posture in the long run. These are people who already know your network – and perhaps more importantly, your work environment – and have an interest in InfoSec. Give them a shot; you might be pleased with the results.

As to the bullet points above? You’ll have to address those long-term too. They’re why you’re struggling to find InfoSec people in the first place. Though some of them are out of your control, you can offer training and trips to conferences like DefCon to minimize them.

Tuesday 30 October 2012

Global Cloud Index: Traffic to Grow Sixfold by 2016


In the second annual Cisco Global Cloud Index (2011-2016), Cisco forecasts global data center traffic to grow fourfold and reach a total of 6.6 zettabytes annually by 2016. The company also predicts global cloud traffic, the fastest-growing component of data center traffic, to grow sixfold - a 44 percent compound annual growth rate (CAGR) - from 683 exabytes of annual traffic in 2011 to 4.3 zettabytes by 2016.
For context, 6.6 zettabytes is equivalent to:
92 trillion hours of streaming music - Equivalent to about 1.5 years of continuous music streaming for the world's population in 2016.
16 trillion hours of business Web conferencing - Equivalent to about 12 hours of daily Web conferencing for the world's workforce in 2016.
7 trillion hours of online high-definition (HD) video streaming - Equivalent to about 2.5 hours of daily streamed HD video for the world's population in 2016 (a rough sanity check of this figure follows below).
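As a rough sanity check of that last equivalence, the arithmetic below derives the implied per-stream bitrate and hours-per-person figure from Cisco's totals. The 2016 world-population estimate (about 7.4 billion) is an outside assumption, not a number from the report.

```python
# Back-of-the-envelope check of Cisco's HD-video equivalence for 6.6 ZB/year.
annual_traffic_bytes = 6.6e21      # 6.6 zettabytes of data center traffic per year
hd_hours = 7e12                    # 7 trillion hours of streamed HD video
world_population_2016 = 7.4e9      # assumption: roughly 7.4 billion people in 2016

bytes_per_hour = annual_traffic_bytes / hd_hours
implied_mbps = bytes_per_hour * 8 / 3600 / 1e6
hours_per_person_per_day = hd_hours / 365 / world_population_2016

print(f"Implied HD stream rate: {implied_mbps:.1f} Mbit/s")          # ~2.1 Mbit/s
print(f"Daily HD hours per person: {hours_per_person_per_day:.1f}")  # ~2.6 hours
```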
The vast majority of the data center traffic is not caused by end users but by data centers and cloud-computing workloads used in activities that are virtually invisible to individuals.
For the period 2011-2016, Cisco forecasts that roughly 76 percent of data center traffic will stay within the data center and will be largely generated by storage, production and development data. An additional 7 percent of data center traffic will be generated between data centers, primarily driven by data replication and software/system updates.
The remaining 17 percent of data center traffic will be fueled by end users accessing clouds for Web surfing, emailing and video streaming.
From a regional perspective, the Cisco Global Cloud Index predicts that through 2016, the Middle East and Africa will have the highest cloud traffic growth rate, while the Asia Pacific region will process the most cloud workloads, followed by North America.
Overview of the Latest Worldwide Market Study:
The Cisco Global Cloud Index (2011-2016) was developed to estimate global data center and cloud-based Internet Protocol (IP) traffic growth and trends. The Cisco Global Cloud Index serves as a complementary resource to existing network traffic studies, providing new insights and visibility into emerging trends affecting data centers and cloud architectures. The forecast becomes increasingly important as networks and data centers become more intrinsically linked in offering cloud services.
The Cisco Global Cloud Index includes a "workload transition" forecast, which shows the workload shifting from traditional data centers to more virtualized cloud servers.
The forecast also includes a supplement on Cloud Readiness Regional Details, which examines the fixed and mobile network abilities of each global region (from nearly 150 countries) to support business and consumer cloud-computing applications and services.
The Cisco Global Cloud Index is generated by modeling and analysis of various primary and secondary sources, including 40 terabytes of traffic data sampled from a variety of global data centers over the past year; results from more than 90 million network tests over the past two years; and third-party market research reports.
"As cloud traffic continues to proliferate in a new world of many clouds, the Cisco Global Cloud Index provides all cloud computing stakeholders with a very valuable barometer to make strategic, long-term planning decisions. This year's forecast confirms that strong growth in data center usage and cloud traffic are global trends, driven by our growing desire to access personal and business content anywhere, on any device. When you couple this growth with projected increases in connected devices and objects, the next-generation Internet will be an essential component to enabling much greater data center virtualization and a new world of interconnected clouds," said Doug Merritt, senior vice president, Corporate Marketing, Cisco Systems.
For further information visit: http://cloudcomputing.sys-con.com/node/2416832
Article No.2
Big Data Security for Apache Hadoop
Ten Tips for HadoopWorld attendees
Big Data takes center stage today at the Strata Conference & Hadoop World in New York, the world’s largest gathering of the Apache Hadoop™ community. A key conversation topic will be how organizations can improve data security for Hadoop and the applications that run on the platform. As you know, Hadoop and similar data stores hold a lot of promise for organizations to finally gain some value out of the immense amount of data they're capturing. But HDFS, Hive and other nascent NoSQL technologies were not necessarily designed with comprehensive security in mind. Often what happens as big data projects grow is sensitive data like HIPAA data, PII and financial records get captured and stored. It's important this data remains secure at rest.
I polled my fellow co-workers at Gazzang last week, and asked them to come up with a top ten list for securing Apache Hadoop. Here's what they delivered. Enjoy:
1. Think about security before getting started – You don’t wait until after a burglary to put locks on your doors, and you should not wait until after a breach to secure your data. Make sure a serious data security discussion takes place before installing and feeding data into your Hadoop cluster.
2. Consider what data may get stored – If you are using Hadoop to store and run analytics against regulatory data, you likely need to comply with specific security requirements. If the stored data does not fall under regulatory jurisdiction, keep in mind the risks to your public reputation and potential loss of revenue if data such as personally identifiable information (PII) were breached.
3. Encrypt data at rest and in motion – Add transparent data encryption at the file layer as a first step toward enhancing the security of a big data project. SSL encryption can protect big data as it moves between nodes and applications (see the sketch after this list).
As Securosis analyst Adrian Lane wrote in a recent blog, “File encryption addresses two attacker methods for circumventing normal application security controls. Encryption protects in case malicious users or administrators gain access to data nodes and directly inspect files, and it also renders stolen files or disk images unreadable. It is transparent to both Hadoop and calling applications and scales out as the cluster grows. This is a cost-effective way to address several data security threats.”
4. Store the keys away from the encrypted data – Storing encryption keys on the same server as the encrypted data is akin to locking your house and leaving the key in your front door. Instead, use a key management system that separates the key from the encrypted data.
5. Institute access controls – Establishing and enforcing policies that govern which people and processes can access data stored within Hadoop is essential for keeping rogue users and applications off your cluster.
6. Require multi-factor authentication – Multi-factor authentication can significantly reduce the likelihood of an account being compromised or access to Hadoop data being granted to an unauthorized party.
7. Use secure automation – Beyond data encryption, organizations should look to DevOps tools such as Chef or Puppet for automated patch and configuration management.
8. Frequently audit your environment – Project needs, data sets, cloud requirements and security risks are constantly changing. It's important to make sure you are closely monitoring your Hadoop environment and performing frequent checks to ensure performance and security goals are being met.
9. Ask tough questions of your cloud provider – Be sure you know what your cloud provider is responsible for. Will they encrypt your data? Who will store and have access to your keys? How is your data retired when you no longer need it? How do they prevent data leakage?
10. Centralize accountability – Centralizing the accountability for data security ensures consistent policy enforcement and access control across diverse organizational silos and data sets.
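As a small illustration of tips 3 and 4, the sketch below encrypts a file before it is loaded into the cluster and reads the key from somewhere other than the data itself. It uses Python's cryptography package purely as an example; the file names are hypothetical, and a production deployment would rely on transparent file-layer encryption and a proper key management system rather than an ad-hoc script.

```python
# Hypothetical sketch: encrypt a data file before it lands in HDFS,
# keeping the key separate from the encrypted data (tips 3 and 4).
import os
from cryptography.fernet import Fernet

# The environment variable is a stand-in for a real key management system;
# the key should never live on the same node as the data it protects.
key = os.environ.get("DATA_ENCRYPTION_KEY") or Fernet.generate_key()
cipher = Fernet(key)

with open("customer_records.csv", "rb") as src:        # hypothetical input file
    ciphertext = cipher.encrypt(src.read())

with open("customer_records.csv.enc", "wb") as dst:
    dst.write(ciphertext)

# Only the .enc file goes into the cluster; anyone copying raw blocks
# off a data node sees ciphertext, not PII.
```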
Did we miss anything? If so, please comment below, and enjoy Strata + Hadoop World.
For further information visit: http://cloudcomputing.sys-con.com/node/2416407
13) DATA SECURITY/NETWORK SECURITY/CYBER SECURITY:
Article No.1
Northrop Grumman to Build Cyber Test Range in Australia
Northrop Grumman has been awarded a contract to build a cyber test range for the University of New South Wales (UNSW), Canberra campus at the Australian Defence Force Academy (ADFA) in Australia.
The ADFA is a partnership between the Australian Defence Force and the University of New South Wales. ADFA's primary mission is training and educating the future leaders of the Royal Australian Navy, Army and Air Force.
That training and education is supported by world-class research, including the burgeoning area of cyber research. "We are proud to contribute our extensive cyber test range experience and capabilities to UNSW Canberra, at ADFA, and are looking forward to an enduring partnership," said Kathy Warden, vice president and general manager for Northrop Grumman's Cyber Intelligence division.
"This award reaffirms our dedication to providing our allies with best-value cybersecurity solutions, and our commitment to science, technology, engineering and mathematics education."
Northrop Grumman is an industry leader in all aspects of computer network operations and cybersecurity, offering customers innovative solutions to help secure the nation's cyber future. For more about cybersecurity at Northrop Grumman, go to www.northropgrumman.com/cybersecurity.
Article No.2
'Mini-Flame' virus hikes Mideast cyber war
Amid U.S. warnings about potentially cataclysmic cyberattacks, with Iran the most likely culprit, cybersecurity experts say they've uncovered a powerful new espionage virus in the Middle East that's reserved for high-value targets.
The virus, used in recent attacks in Iran and Lebanon, has been dubbed "Mini-Flame" by researchers at Moscow's Kaspersky Lab, a leading cybersecurity company, after the W32.Flame malware discovered earlier this year.
Flame and another new virus known as Gauss were used in a series of cyberattacks against targets in Iran recently. Kaspersky claims Mini-Flame comes from the same "cyber-weapon factory" as those two variants, as well as the Stuxnet program used against Iran's nuclear program in 2009-10.
Lebanese banks that U.S. officials say are suspected of laundering money for Iran and Hezbollah, its powerful Lebanese proxy, have also been hit in recent weeks. This suggests that these viruses are the work of the U.S. and Israeli intelligence services, which at one time or another over the last three years have hit Iran's nuclear program, and more recently its oil industry, and that further cyberattacks are likely amid an armed confrontation in the Persian Gulf.
Stuxnet is widely believed to have been developed by Israeli and U.S. intelligence agencies, including Israel's super-secret Unit 8200, as part of their clandestine campaign to sabotage Tehran's uranium-enrichment program, allegedly aimed at developing nuclear weapons. Iran says its program is for peaceful purposes.
The New York Times reported in June that Stuxnet was part of a joint U.S.-Israeli cyber war operation codenamed Olympic Games directed against the Islamic Republic. The concern now is that the Iranians are driving to develop their own cyber weapons -- and recent evidence suggests they're well advanced -- to strike back against the United States and Israel in what Rear Adm. Samuel Cox, director of intelligence at the U.S. Cyber Command, calls "a global cyber arms race."
It's these fears, plus well-publicized attacks on Citigroup, Lockheed Martin and other U.S. companies, that led U.S. Defense Secretary Leon Panetta to warn Thursday that Iran could be preparing to launch a retaliatory major cyber attack on the United States.
Panetta did not specifically mention Iran as a threat in this regard. But he said the recent attacks on U.S. companies were probably "the most destructive attack that the private sector has seen to date." Tehran denied Sunday it was behind those cyberattacks.
Israel too has been the target of increasing cyber strikes. Prime Minister Binyamin Netanyahu told a cabinet meeting Sunday there has been "an escalation in attempts to carry out a cyber attack on Israel's computer infrastructures. There are daily attempts to break into Israeli systems."
Kaspersky's chief security specialist, Alexander Gostev, says the information-stealing Mini-Flame works in tandem with Flame and Gauss. "If Flame and Gauss were massive cyber-espionage operations, infecting thousands of users, then Mini-Flame is a high-precision, surgical attack tool," the Russian researchers concluded.
Mini-Flame, Kaspersky researchers say, is apparently reserved for attacks against high-value targets "having the greatest significance ... to the attackers."
Gostev believes that Mini-Flame is designed to be used as a "second wave" of attack on targets already hit by W32.Flame or Gauss. "Mini-Flame is a high-precision attack tool," he said. "After data is collected via Flame and reviewed, a potentially interesting victim is defined and identified, and Mini-Flame is installed in order to conduct more in-depth surveillance and cyber espionage."
The Financial Times, which has called for urgent efforts by industrial, financial and commercial concerns to build defenses against cyber-attacks, said the discovery of Mini-Flame has raised fears "that researchers have only begun to scratch the surface of cyber warfare being waged" in the Middle East.
"The covert cyber war being waged in the Middle East and North Africa -- particularly against Iran and its allies -- is even more sophisticated and widespread then had previously been understood, according to new research," one informed Western source observed.
The recent intensification of cyber operations in the Middle East has heightened concerns that these could trigger military conflict in the region, particularly in the Gulf. "Next year will see the escalation of cyber weapons," Eugene Kaspersky, co-founder of Kaspersky Lab, told a recent conference in Dubai.

Monday 22 October 2012

ATI / AMD Architecture

Understanding ATI / AMD Architecture
 

Each ATI graphics card has a certain number of SIMD (Single Instruction, Multiple Data) cores.

Each SIMD core has 16 stream processors (SPs).

Each SP is a five-way superscalar processor.
Each SP has 4 mul-add units and one special operation unit.

So, if an ATI card has 10 SIMD cores, then

it can do 10 * 16 * 5 = 800 operations per clock cycle (each SIMD core has 16 SPs, and each SP has 5 ALUs).
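To make that arithmetic explicit, here is a minimal sketch in Python. The 0.75 GHz clock is only an illustrative value (it happens to be the FireStream 9250 clock quoted in a later post), not something fixed by the architecture.

```python
# Sketch: operations per clock for a hypothetical ATI card with 10 SIMD cores.
simd_cores = 10      # SIMD cores on the card
sps_per_simd = 16    # stream processors per SIMD core
alus_per_sp = 5      # five-way superscalar: 5 ALUs per SP

ops_per_clock = simd_cores * sps_per_simd * alus_per_sp
print(ops_per_clock)            # 800 operations per clock cycle

# If every ALU can do a multiply-add (2 flops) and the clock is, say, 0.75 GHz:
clock_ghz = 0.75                # illustrative value only
peak_gflops = ops_per_clock * 2 * clock_ghz
print(peak_gflops)              # 1200.0 GFLOPS
```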
 

Sunday 21 October 2012

Nvidia GPU Architecture


Each Nvidia GPU has a number of streaming multiprocessors (SMs).

 
Each SM has 8 stream processors (SPs), two Special Function Units (SFUs), and one double-precision FPU.
 
• 8 SPs
• 2 Special Function Units (SFUs)
          – 4 FP multiply units per SFU – used for transcendental operations (e.g. sin) and interpolation
• 64-bit double-precision FPU
• MT issue unit – dispatches instructions to the SPs and SFUs
• Cache
          – very small instruction cache
          – read-only data cache
• 16 KB read/write shared memory
• Multi-threaded instruction dispatch
          – 1 to 1024 threads active
          – shared instruction fetch per 32 threads
          – covers latency of texture/memory loads
 
FLOPS
30 Streaming Multiprocessors
          – 8 SPs – 1 MAD (2 ops) per cycle per SP
          – 2 SFUs – 4 MULs per cycle per SFU
• An SP can dual-issue MAD and MUL operations in conjunction with the SFU
          – 3 floating-point operations per clock cycle
• 1476 MHz (GTX 285) or 1296 MHz (GTX 280) clock for SM functional units
• FLOPS = 30 SMs * 8 SPs * 3 ops/cycle * 1476 MHz = 1063 GFLOPS
• FLOPS without dual-issue = 30 SMs * 8 SPs * 2 ops/cycle * 1476 MHz = 709 GFLOPS
Double-precision performance
30 Streaming Multiprocessors
          – 1 double-precision FPU – 1 double MAD (2 ops) per cycle
• FLOPS = 30 SMs * 1 FPU * 2 ops/cycle * 1476 MHz = 88 GFLOPS
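For completeness, the same arithmetic as a short Python sketch, using only the GTX 285 figures quoted above:

```python
# Sketch: peak FLOPS for a GT200-class GPU (GTX 285 figures from the text above).
sms = 30            # streaming multiprocessors
sps_per_sm = 8      # stream processors per SM
clock_ghz = 1.476   # SM functional unit clock on the GTX 285

# Single precision with MAD + MUL dual issue: 3 ops per SP per cycle
sp_dual_issue = sms * sps_per_sm * 3 * clock_ghz   # ~1063 GFLOPS
# Single precision, MAD only: 2 ops per SP per cycle
sp_mad_only = sms * sps_per_sm * 2 * clock_ghz     # ~708 GFLOPS (rounded to 709 above)
# Double precision: one DP FPU per SM doing one MAD (2 ops) per cycle
dp = sms * 1 * 2 * clock_ghz                       # ~88.6 GFLOPS

print(sp_dual_issue, sp_mad_only, dp)
```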

 
 

Friday 19 October 2012

What is AMD gDEBugger?


AMD gDEBugger is an OpenCL™ and OpenGL debugger and memory analyzer that is available as a Microsoft® Visual Studio® plugin on Windows® and as a standalone application on Linux®. gDEBugger provides the ability to debug OpenCL™ and OpenGL API calls and OpenCL™ kernels, and to step through the source code to find bugs, optimize performance and reduce memory consumption.

gDEBugger offers real-time OpenCL™ kernel debugging, which allows developers to step into the kernel execution directly from the API calls, debug inside the kernel, and view all variable values across different work groups and work items – all on a single computer with a single GPU. In addition, gDEBugger takes the mystery out of debugging OpenCL™ and OpenGL, allowing developers to peer into compute and graphics memory objects, monitor their contents, and detect memory leaks and the scenarios that cause them. Users can view and save the API call logs, find deprecated functions, and see the recommended alternative function calls.


What's new in gDEBugger 6.2?


  • Introducing Linux® Support
  • New standalone user interface for both Linux® and Windows®, with enhancements for better navigation and ease of use
  • Supports OpenCL™ kernel and API level debugging on AMD Radeon HD 7000 series graphics cards
  • Supports OpenCL™ 1.2 beta drivers
  • Automatic updater to notify and download new product updates
  • Feature enhancements including support for static arrays, union variables and Find feature
  • Stability improvements

Monday 15 October 2012

GPGPU Comparisons

GPGPU Comparisons
 
Specification | AMD FireStream 9250 | NVIDIA Quadro FX 4800 | NVIDIA Quadro 6000 | ATI FirePro V9800 | Tesla C2070
No. of cores | 10 | 24 | 56 | 20 | 56
SIMDs per core | 16 | 8 | 8 | 16 | 8
mul-add units (2 flops) | 5 | 1 | 1 | 5 | 1
mul units (1 flop) | 0 | 1 | 0 | 0 | 0
Clock speed (GHz) | 0.75 | 1.204 | 1.15 | 0.85 | 1.15
Max GFLOPS (cores * SIMDs * (mul-add * 2 + mul * 1) * clock speed) | 1200 | 693.504 | 1030.4 | 2720 | 1030
Global memory (GB) | 1 | 1.5 | 6 | 4 | 6
Local memory | nil | 16 KB | 16 KB | 16 KB | 16 KB
Bandwidth | - | - | 144 GB/s | - | 144 GB/s
Power consumption | - | - | 225 W | 225 W | 238 W

How to Calculate FLOPS of GPU

How to Calculate FLOPS (floating point operations per second) of GPU

or How to Calculate GigaFlops of GPU

For this, you should know:

  1. Clock Speed of the GPU
  2. No. of mul-add units (ALUs that can perform both a mul and an add in one clock cycle) and no. of mul units
  3. No of SIMD units
  4. No of Cores

    Clock speed tells you how many clock cycles (and thus instruction issues) occur in one second.

    In one clock tick, one mul-add unit can perform one multiplication + one addition, i.e. 2 (two) floating-point operations per clock tick.
    In one clock tick, one mul unit can perform one multiplication, i.e. 1 floating-point operation per clock tick.

    Each SIMD unit contains these ALUs, and each core contains SIMD units, so:

    Floating-point operations per second = no. of cores * no. of SIMD units * ((no. of mul-add units * 2) + (no. of mul units * 1)) * clock speed in hertz

If the clock speed is in GHz, the result is in gigaflops (GFLOPS).
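The formula translates directly into a small Python function; below it is checked against two of the cards from the example table that follows, so the only inputs are numbers already given there.

```python
# Peak GFLOPS = cores * SIMDs per core * (mul-add units * 2 + mul units * 1) * clock (GHz)
def peak_gflops(cores, simds_per_core, mul_add_units, mul_units, clock_ghz):
    return cores * simds_per_core * (mul_add_units * 2 + mul_units * 1) * clock_ghz

# Values taken from the comparison table below
print(peak_gflops(10, 16, 5, 0, 0.75))   # AMD FireStream 9250   -> 1200.0
print(peak_gflops(24, 8, 1, 1, 1.204))   # NVIDIA Quadro FX 4800 -> 693.504
```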
Example:
Specification | AMD FireStream 9250 | NVIDIA Quadro FX 4800 | NVIDIA Quadro 6000 | ATI FirePro V9800 | Tesla C2070
No. of cores | 10 | 24 | 56 | 20 | 56
SIMDs per core | 16 | 8 | 8 | 16 | 8
mul-add units (2 flops) | 5 | 1 | 1 | 5 | 1
mul units (1 flop) | 0 | 1 | 0 | 0 | 0
Clock speed (GHz) | 0.75 | 1.204 | 1.15 | 0.85 | 1.15
Max GFLOPS (cores * SIMDs * (mul-add * 2 + mul * 1) * clock speed) | 1200 | 693.504 | 1030.4 | 2720 | 1030
Global memory (GB) | 1 | 1.5 | 6 | 4 | 6
Local memory | nil | 16 KB | 16 KB | 16 KB | 16 KB
Bandwidth | - | - | 144 GB/s | - | 144 GB/s
Power consumption | - | - | 225 W | 225 W | 238 W


The above table also serves as a GPU comparison.



The same formula applies to CPUs as well, but there may be no SIMD units.

Thursday 11 October 2012

GPGPU Hardware

Graphics Processing Units (GPUs), designed to accelerate graphics applications, are highly parallel processors capable of hundreds of GFLOPS. Competition among the GPU vendors for market share in the PC gaming market has driven technological advancements in graphics cards, and the volume in this market has driven prices down.

These processors can be applied to non-graphical high-performance computing applications. Researchers have been investigating this usage for several years with success in a number of areas. This new market for their technology has been recognized by the main graphics card vendors, Nvidia and AMD/ATI. Both have introduced product lines specifically targeting high-performance computing, sometimes called the GPGPU market for general purpose computing on graphics processing units.
 



Nvidia Tesla Products
 
Nvidia introduced the Tesla product line in 2007. The first Tesla card is called the C870 and is based on their high-end graphics card, the FX 5600, but lacks video outputs and, at $1,499, is priced at half the $2,999 FX 5600. Like graphics cards, the Tesla C870 requires a PCI-Express 16x slot. It also draws a lot of power, about 170 W, and space-wise it takes up two slots and is full-length. An important limitation of these Nvidia GPUs is that they only support single-precision floating-point arithmetic.


 
Tesla C870 – 170 W, 1.5 GB, 518 GFLOPS peak
 
 AMD FireStream

 
AMD made FireStream GPGPU boards available for early adopters and developers in 2007. In November 2007, they announced the FireStream 9170, intended for production use, with general availability expected in Q2 2008. The board is priced at $1,999 and draws less than 100 W, but is similar in size to the Tesla C870. Unlike the Nvidia GPUs, the FireStream supports double precision, but at an estimated 102 GFLOPS peak, compared to its single-precision performance of 500 GFLOPS peak.


 
AMD FireStream 9170 – 100 W, 2 GB, 500 GFLOPS peak