Monday, 1 June 2015
Oracle Big Data
Architecture Overview
Oracle Big Data Appliance is a high-performance, secure platform for running diverse workloads on Hadoop and NoSQL systems. With Oracle Big Data SQL, Oracle Big Data Appliance extends Oracle’s industry-leading implementation of SQL to Hadoop and NoSQL systems. By combining the newest technologies from the Hadoop ecosystem with powerful Oracle SQL capabilities on a single pre-configured platform, Oracle Big Data Appliance is uniquely able to support rapid development of new Big Data applications and tight integration with existing relational data. Oracle Big Data Appliance is pre-configured for secure environments, leveraging Apache Sentry, Kerberos, network encryption, encryption at rest, and Oracle Audit Vault and Database Firewall.
Oracle Big Data SQL is an innovation from Oracle available only on Oracle Big Data Appliance. It is a new architecture for SQL on Hadoop, seamlessly integrating data in Hadoop and NoSQL with data in Oracle Database. Oracle Big Data SQL radically simplifies integrating and operating in the big data domain through two powerful features: newly expanded External Tables and Smart Scan functionality on Hadoop.
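To make the external-table idea concrete, here is a minimal JDBC sketch (not an official Oracle example). It assumes an external table named WEB_LOGS has already been defined in the database over a Hive table via Big Data SQL, and that a relational CUSTOMERS table exists; the connection URL, credentials, and all table and column names are placeholders.

```java
// Minimal sketch: querying a (hypothetical) Big Data SQL external table over JDBC.
// Assumes WEB_LOGS is an external table mapped to a Hive table on the appliance,
// CUSTOMERS is an ordinary relational table, and the Oracle JDBC driver is on
// the classpath. Connect string, credentials, and names are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class BigDataSqlQuery {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@//dbhost:1521/orcl";   // placeholder connect string
        try (Connection conn = DriverManager.getConnection(url, "scott", "tiger")) {
            // One SQL statement joins Hadoop-resident data (WEB_LOGS, exposed
            // through an external table) with relational data (CUSTOMERS).
            String sql =
                "SELECT c.cust_name, COUNT(*) AS clicks " +
                "FROM   web_logs w JOIN customers c ON w.cust_id = c.cust_id " +
                "WHERE  w.activity = ? " +
                "GROUP  BY c.cust_name";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, "purchase");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1) + ": " + rs.getLong(2));
                    }
                }
            }
        }
    }
}
```

Because the external table behaves like any other Oracle table, the join and filter are expressed in plain SQL, and predicate processing can be offloaded to the Hadoop side through Smart Scan.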
Oracle Big Data Appliance is sized to support pilot projects, and flexible enough to grow with the needs of your business. Oracle Big Data Appliance integrates tightly with Oracle Exadata and Oracle Database using Big Data SQL and Oracle Big Data Connectors, seamlessly enabling analysis of all data in the enterprise.
Friday, 22 May 2015
computer cluster
A computer cluster consists of a set of loosely or tightly connected computers that work together so that, in many respects, they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software.
The components of a cluster are usually connected to each other through fast local area networks ("LAN"), with each node (a computer used as a server) running its own instance of an operating system. In most circumstances, all of the nodes use the same hardware and the same operating system, although in some setups (e.g. using Open Source Cluster Application Resources (OSCAR)), different operating systems and/or different hardware can be used on each computer.
They are usually deployed to improve performance and availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.
Computer clusters emerged as a result of the convergence of a number of computing trends, including the availability of low-cost microprocessors, high-speed networks, and software for high-performance distributed computing. They have a wide range of applicability and deployment, ranging from small business clusters with a handful of nodes to some of the fastest supercomputers in the world, such as IBM's Sequoia. The applications they can run are nonetheless limited, since the software generally needs to be purpose-built per task; it is hence not practical to use computer clusters for casual, general-purpose computing tasks.
HDFS: Hadoop Distributed File System
The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware. It has many similarities with existing distributed file systems. However, the differences from other distributed file systems are significant. HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware. HDFS provides high-throughput access to application data and is suitable for applications that have large data sets. HDFS relaxes a few POSIX requirements to enable streaming access to file system data. HDFS was originally built as infrastructure for the Apache Nutch web search engine project. HDFS is now an Apache Hadoop subproject.
The project URL is http://hadoop.apache.org/hdfs/.
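As a small illustration of the programming model, the sketch below uses the standard Hadoop FileSystem API to copy a local file into HDFS and stream it back. The namenode URI, paths, and file names are placeholders, and the hadoop-client libraries are assumed to be on the classpath.

```java
// Minimal sketch of HDFS access through the Hadoop FileSystem API.
// The namenode URI and all paths are placeholders.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // fs.defaultFS would normally come from core-site.xml; set explicitly here.
        conf.set("fs.defaultFS", "hdfs://namenode:8020");

        try (FileSystem fs = FileSystem.get(conf)) {
            Path src = new Path("/tmp/local-data.txt");   // local file (placeholder)
            Path dst = new Path("/user/demo/data.txt");   // HDFS target (placeholder)

            // Copy a local file into HDFS, then stream it back line by line.
            fs.copyFromLocalFile(src, dst);
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(fs.open(dst), StandardCharsets.UTF_8))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }
}
```

Reading the file back sequentially reflects the streaming access pattern that HDFS is optimized for, rather than random, low-latency reads.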
MySQL Workbench 5.0 appears to run slowly. How can I increase performance?
Although graphics rendering may appear slow, there are several other reasons why performance may be less than expected. The following tips may offer improved performance:
- Upgrade to the latest version. MySQL Workbench 5.0 is still actively maintained, and some performance-related issues may have been resolved.
- Limit the number of steps to save in the Undo History facility. Depending on the operations performed, having an infinite undo history can use a lot of memory after a few hours of work. In Tools, Options, General, enter a number in the range 10 to 20 into the Undo History Size spinbox.
- Disable relationship line crossing rendering. In large diagrams, there may be a significant overhead when drawing these line crossings. In Tools, Options, Diagram, uncheck the option named Draw Line Crossings.
- Check your graphics card driver. The GDI rendering that is used in MySQL Workbench 5.0 is not inherently slow, as most video drivers support hardware acceleration for GDI functions. It can help if you have the latest native video drivers for your graphics card.
- Upgrade to MySQL Workbench 5.1. MySQL Workbench 5.1 has had many operations optimized. For example, opening an object editor, such as the table editor, is much faster, even with a large model loaded. However, these core optimizations will not be back-ported to 5.0.