Architecture

Database architecture consists of three levels: external, conceptual and internal. Clearly separating the three levels was a major feature of the relational database model that dominates 21st-century databases.[2]

The external level defines how users understand the organization of the data. A single database can have any number of views at the external level. The internal level defines how the data is physically stored and processed by the computing system. Internal architecture is concerned with cost, performance, scalability and other operational matters. The conceptual level is a layer of indirection between internal and external. It provides a common view of the database that is uncomplicated by details of how the data is stored or managed, and that can unify the various external views into a coherent whole.[2]
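
The separation between the conceptual and external levels can be illustrated in SQL, where base tables describe the conceptual schema and views present tailored external schemas; the table and column names below are hypothetical:

    -- Conceptual level: a base table, free of storage details
    CREATE TABLE employee (
        id         INTEGER PRIMARY KEY,
        last_name  VARCHAR(60),
        department VARCHAR(40),
        salary     DECIMAL(10,2)
    );

    -- External level: a view exposing only what one class of user may see
    CREATE VIEW employee_directory AS
        SELECT id, last_name, department
        FROM employee;

The internal level, by contrast, is not visible in SQL at all: the DBMS decides how the rows of employee are laid out in files, pages and indexes.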
Database management systems
Main article: Database management system

A database management system (DBMS) consists of software that operates databases, providing storage, access, security, backup and other facilities. Database management systems can be categorized according to the database model that they support, such as relational or XML; the type(s) of computer they support, such as a server cluster or a mobile phone; the query language(s) that access the database, such as SQL or XQuery; or performance trade-offs, such as maximum scale or maximum speed. Some DBMSs cover more than one entry in these categories, e.g., supporting multiple query languages. Examples of commonly used DBMSs include MySQL, PostgreSQL, Microsoft Access, SQL Server, FileMaker, Oracle, Sybase, dBASE, Clipper and FoxPro. Almost every database product comes with an Open Database Connectivity (ODBC) driver, a standard interface through which applications and other databases can access it.
Components of DBMS

Most DBMSs as of 2009 implement a relational model.[3] Other DBMSs, such as object DBMSs, offer specific features for more specialized requirements. Their components are similar, but not identical.
RDBMS components

* Sublanguages—Relational DBMSs (RDBMSs) include a Data Definition Language (DDL) for defining the structure of the database, a Data Control Language (DCL) for defining security/access controls, and a Data Manipulation Language (DML) for querying and updating data (see the sketch after this list).
* Interface drivers—These drivers are code libraries that provide methods to prepare statements, execute statements, fetch results, etc. Examples include ODBC, JDBC, MySQL/PHP, FireBird/Python.
* SQL engine—This component interprets and executes the DDL, DCL, and DML statements. It includes three major components (compiler, optimizer, and executor).
* Transaction engine—Ensures that multiple SQL statements either succeed or fail as a group, according to application dictates.
* Relational engine—Relational objects such as Table, Index, and Referential integrity constraints are implemented in this component.
* Storage engine—This component stores and retrieves data from secondary storage, as well as managing transaction commit and rollback, backup and recovery, etc.
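
A minimal sketch of the three sublanguages, using a hypothetical accounts table and role name; DCL syntax in particular varies between products:

    -- DDL: define the structure of the database
    CREATE TABLE accounts (
        id      INTEGER PRIMARY KEY,
        owner   VARCHAR(60),
        balance DECIMAL(12,2)
    );

    -- DCL: define security/access controls
    GRANT SELECT ON accounts TO report_user;

    -- DML: query and update data
    INSERT INTO accounts (id, owner, balance) VALUES (1, 'Ada', 100.00);
    UPDATE accounts SET balance = balance + 50 WHERE id = 1;
    SELECT owner, balance FROM accounts WHERE balance > 100;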

ODBMS components

An object DBMS (ODBMS) has transaction and storage components that are analogous to those in an RDBMS. Some handle DDL, DML and update tasks differently, providing APIs for these purposes instead of sublanguages. They typically include a sublanguage and accompanying engine for processing queries with interpretive statements analogous to, but not the same as, SQL. Example object query languages are OQL, LINQ, JDOQL and JPAQL. The query engine returns collections of objects instead of relational rows.
Types
Operational database

These databases store detailed data about the operations of an organization. They are typically organized by subject matter and process relatively high volumes of updates using transactions. Essentially every major organization on earth uses such databases. Examples include customer databases that record contact, credit, and demographic information about a business's customers; personnel databases that hold information such as salary, benefits and skills data about employees; enterprise resource planning databases that record details about product components and parts inventory; and financial databases that keep track of the organization's money, accounting and financial dealings.
Data warehouse

Data warehouses archive data from operational databases and often from external sources such as market research firms. Often operational data undergoes transformation on its way into the warehouse, getting summarized, anonymized, reclassified, etc. The warehouse becomes the central source of data for use by managers and other end-users who may not have access to operational data. For example, sales data might be aggregated to weekly totals and converted from internal product codes to UPC codes so that it can be compared with ACNielsen data. Some basic and essential components of data warehousing include retrieving, analyzing and transforming data, and loading and managing it so as to make it available for further use.
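
The weekly aggregation described above might be expressed as a summarization query run during loading. A sketch in PostgreSQL-flavored SQL with hypothetical table names; real warehouse loading typically relies on dedicated ETL tools rather than a single statement:

    -- Summarize operational sales rows into weekly totals,
    -- translating internal product codes to UPC codes on the way in
    INSERT INTO warehouse_weekly_sales (upc, week_start, total_amount)
    SELECT m.upc,
           DATE_TRUNC('week', s.sale_date) AS week_start,
           SUM(s.amount)                   AS total_amount
    FROM   operational_sales s
    JOIN   product_code_map  m ON m.internal_code = s.product_code
    GROUP  BY m.upc, DATE_TRUNC('week', s.sale_date);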
Analytical database

Analysts may do their work directly against a data warehouse, or create a separate analytic database for Online Analytical Processing (OLAP). For example, a company might extract sales records to analyze the effectiveness of advertising and other sales promotions at an aggregate level.
Distributed database

These are databases of local work-groups and departments at regional offices, branch offices, manufacturing plants and other work sites. These databases can include segments of both common operational and common user databases, as well as data generated and used only at a user’s own site.
End-user database

These databases consist of data developed by individual end-users. Examples include collections of documents in spreadsheets, word-processing files and downloaded files, or even a database managing a personal baseball card collection.
External database

These databases contain data collected for use across multiple organizations, either freely or via subscription. The Internet Movie Database is one example.
Hypermedia databases

The World Wide Web can be thought of as a database, albeit one spread across millions of independent computing systems. Web browsers "process" this data one page at a time, while web crawlers and other software provide the equivalent of database indexes to support search and other activities.
Models
Main article: Database model
Post-relational database models

Products offering a more general data model than the relational model are sometimes classified as post-relational.[4] Alternate terms include "hybrid database", "Object-enhanced RDBMS" and others. The data model in such products incorporates relations but is not constrained by E.F. Codd's Information Principle, which requires that

all information in the database must be cast explicitly in terms of values in relations and in no other way[5]

Some of these extensions to the relational model integrate concepts from technologies that pre-date the relational model. For example, they allow representation of a directed graph with trees on the nodes.

Some post-relational products extend relational systems with non-relational features. Others arrived in much the same place by adding relational features to pre-relational systems. Paradoxically, this allows products that are historically pre-relational, such as PICK and MUMPS, to make a plausible claim to be post-relational.
Object database models
Main article: Object database

In recent years, the object-oriented paradigm has been applied in areas such as engineering and spatial databases, telecommunications and various scientific domains. The combination of object-oriented programming and database technology led to this new kind of database. These databases attempt to bring the database world and the application-programming world closer together, in particular by ensuring that the database uses the same type system as the application program. This aims to avoid the overhead (sometimes referred to as the impedance mismatch) of converting information between its representation in the database (for example as rows in tables) and its representation in the application program (typically as objects). At the same time, object databases attempt to introduce key ideas of object programming, such as encapsulation and polymorphism, into the world of databases.

A variety of ways have been tried for storing objects in a database. Some products have approached the problem from the application-programming side, by making the objects manipulated by the program persistent. This also typically requires the addition of some kind of query language, since conventional programming languages do not provide language-level functionality for finding objects based on their information content. Others have attacked the problem from the database end, by defining an object-oriented data model for the database, and defining a database programming language that allows full programming capabilities as well as traditional query facilities.
Storage structures
Main article: Database storage structures

Databases may store relational tables/indexes in memory or on hard disk in one of many forms:

* ordered/unordered flat files
* ISAM
* heaps
* hash buckets
* logically-blocked files
* B+ trees

The most commonly used[citation needed] are B+ trees and ISAM.
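
In some products the storage structure can be chosen per table in the DDL. A sketch in MySQL-flavored SQL, with a hypothetical table:

    -- InnoDB stores the table as a B+ tree clustered on the
    -- primary key; the older MyISAM engine descends from ISAM
    CREATE TABLE log_entries (
        id      INT PRIMARY KEY,
        message TEXT
    ) ENGINE = InnoDB;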

Object databases use a range of storage mechanisms. Some use virtual memory-mapped files to make the native language (C++, Java etc.) objects persistent. This can be highly efficient but it can make multi-language access more difficult. Others disassemble objects into fixed- and varying-length components that are then clustered in fixed-size blocks on disk and reassembled into the appropriate format in either the client or server address space. Another popular technique involves storing the objects in tuples (much like a relational database) which the database server then reassembles into objects for the client.[citation needed]

Other techniques include clustering by category (such as grouping data by month or location), storing pre-computed query results, known as materialized views, and partitioning data by range (e.g., a date range) or by hash.
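
A sketch of range partitioning in PostgreSQL-flavored SQL, assuming a hypothetical measurements table partitioned by date:

    -- The parent table declares the partitioning scheme
    CREATE TABLE measurements (
        taken_at DATE NOT NULL,
        reading  DOUBLE PRECISION
    ) PARTITION BY RANGE (taken_at);

    -- Each partition holds one date range
    CREATE TABLE measurements_2024 PARTITION OF measurements
        FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');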

Memory management and storage topology can be important design choices for database designers as well. Just as normalization is used to reduce storage requirements and improve database designs, denormalization is conversely often used to reduce join complexity and query execution time.[6]
Indexing
Main article: Index (database)

Indexing is a technique for improving database performance. The many types of index share the common property that they eliminate the need to examine every entry when running a query. In large databases, this can reduce query time/cost by orders of magnitude. The simplest form of index is a sorted list of values that can be searched using a binary search, with an adjacent reference to the location of the entry, analogous to the index in the back of a book. The same data can have multiple indexes (an employee database could, for example, be indexed by last name and by hire date).
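
The employee example, sketched in standard SQL (assuming a hypothetical employee table with last_name and hire_date columns):

    -- Two independent indexes over the same data
    CREATE INDEX idx_employee_last_name ON employee (last_name);
    CREATE INDEX idx_employee_hire_date ON employee (hire_date);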

Indexes affect performance, but not results. Database designers can add or remove indexes without changing application logic, reducing maintenance costs as the database grows and database usage evolves.

Given a particular query, the DBMS' query optimizer is responsible for devising the most efficient strategy for finding matching data. The optimizer decides which index or indexes to use, how to combine data from different parts of the database, how to provide data in the order requested, etc.
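
Most SQL products expose the optimizer's chosen strategy through some form of EXPLAIN statement; the exact syntax and output vary by product:

    -- Ask the optimizer how it would execute the query; the plan
    -- typically shows whether an index or a full scan is used
    EXPLAIN SELECT id, last_name
    FROM   employee
    WHERE  last_name = 'Codd';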

Indexes can speed up data access, but they consume space in the database, and must be updated each time the data is altered. Indexes therefore can speed data access but slow data maintenance. These two properties determine whether a given index is worth the cost.
Transactions
Main article: Database transaction

Like any software system, a DBMS operates in a computing environment that is prone to failures of many kinds. A failure can corrupt the database unless special measures are taken to prevent this. A DBMS achieves a certain level of fault tolerance by encapsulating units of work performed upon the database into database transactions.
The ACID rules
Main article: ACID

Most DBMSs provide some form of support for transactions, which allow multiple data items to be updated in a consistent fashion, such that updates that are part of a transaction succeed or fail in unison. The so-called ACID rules, summarized here, characterize this behavior:

* Atomicity: Either all the data changes in a transaction must happen, or none of them. The transaction must be completed, or else it must be undone (rolled back).
* Consistency: Every transaction must preserve the declared consistency rules for the database.
* Isolation: Two concurrent transactions cannot interfere with one another. Intermediate results within one transaction must remain invisible to other transactions. The most extreme form of isolation is serializability, meaning that transactions that take place concurrently could instead be performed in some series, without affecting the ultimate result.
* Durability: Completed transactions cannot be aborted later or their results discarded. They must persist through (for instance) DBMS restarts.

In practice, many DBMSs allow the selective relaxation of these rules to balance perfect behavior with optimum performance.
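
A classic illustration of atomicity is a transfer between two rows that must succeed or fail as one unit. A minimal sketch, reusing the hypothetical accounts table from earlier:

    BEGIN;  -- start a transaction (START TRANSACTION in some products)

    UPDATE accounts SET balance = balance - 50 WHERE id = 1;
    UPDATE accounts SET balance = balance + 50 WHERE id = 2;

    -- If either update fails, ROLLBACK undoes both;
    -- otherwise COMMIT makes both changes durable together.
    COMMIT;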
Concurrency control and locking
Main article: Concurrency control

Concurrency control is essential for the correctness of transactions executed concurrently in a DBMS, which is the common execution mode for performance reasons. The main concern and goal of concurrency control is isolation.
Isolation

Isolation refers to the ability of one transaction to see the results of other transactions. Greater isolation typically reduces performance and/or concurrency, leading DBMSs to provide administrative options to reduce isolation. For example, in a database that analyzes trends rather than looking at low-level detail, increased performance might justify allowing readers to see uncommitted changes ("dirty reads").
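
Standard SQL expresses this trade-off through isolation levels, although not every product honors READ UNCOMMITTED literally. A sketch of relaxing isolation for the kind of trend analysis described above, using the hypothetical employee table:

    -- Permit dirty reads for a long-running trend query
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    SELECT department, AVG(salary)
    FROM   employee
    GROUP  BY department;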

A common way to achieve isolation is by locking. When a transaction modifies a resource, the DBMS stops other transactions from also modifying it, typically by locking it. Locks also provide one method of ensuring that data does not change while a transaction is reading it or even that it doesn't change until a transaction that once read it has completed.
Lock types

Locks can be shared[7] or exclusive, and can lock out readers and/or writers. Locks can be created implicitly by the DBMS when a transaction performs an operation, or explicitly at the transaction's request.

Shared locks allow multiple transactions to lock the same resource. The lock persists until all such transactions complete. Exclusive locks are held by a single transaction and prevent other transactions from locking the same resource.

Read locks are usually shared, and prevent other transactions from modifying the resource. Write locks are exclusive, and prevent other transactions from modifying the resource. On some systems, write locks also prevent other transactions from reading the resource.

The DBMS implicitly locks data when it is updated, and may also do so when it is read. Transactions explicitly lock data to ensure that they can complete without complications. Explicit locks may be useful for some administrative tasks.[8][9]
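
A sketch of explicit locking, again with the hypothetical accounts table; SELECT ... FOR UPDATE is widely supported, though lock syntax varies by product:

    BEGIN;

    -- Read the row and hold a write lock on it
    SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;

    -- The row stays locked against other writers until the
    -- transaction commits or rolls back
    UPDATE accounts SET balance = balance - 50 WHERE id = 1;

    COMMIT;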

Locking can significantly affect database performance, especially with large and complex transactions in highly concurrent environments.
Lock granularity

Locks can be coarse, covering an entire database; fine-grained, covering a single data item; or intermediate, covering a collection of data such as all the rows in an RDBMS table.
Deadlocks

Deadlocks occur when two transactions each require data that the other has already locked exclusively. Deadlock detection is performed by the DBMS, which then aborts one of the transactions and allows the other to complete.
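
The classic deadlock arises when two transactions lock the same two rows in opposite orders. A sketch using the hypothetical accounts table:

    -- Session A:
    BEGIN;
    UPDATE accounts SET balance = balance - 10 WHERE id = 1;  -- locks row 1

    -- Session B:
    BEGIN;
    UPDATE accounts SET balance = balance - 10 WHERE id = 2;  -- locks row 2

    -- Session A blocks, waiting for row 2:
    UPDATE accounts SET balance = balance + 10 WHERE id = 2;

    -- Session B blocks, waiting for row 1: deadlock.
    UPDATE accounts SET balance = balance + 10 WHERE id = 1;

    -- The DBMS detects the cycle, aborts one transaction,
    -- and lets the other run to completion.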
Replication
Main article: Database replication

Database replication involves maintaining multiple copies of a database on different computers, to allow more users to access it, or to allow a secondary site to immediately take over if the primary site stops working. Some DBMS piggyback replication on top of their transaction logging facility, applying the primary's log to the secondary in near real-time. Database clustering is a related concept for handling larger databases and user communities by employing a cluster of multiple computers to host a single database that can use replication as part of its approach.[10][11]
Security
Main article: Database security

Database security denotes the system, processes, and procedures that protect a database from unauthorized activity.

DBMSs usually enforce security through access control, auditing, and encryption:

* Access control manages who can connect to the database via authentication and what they can do via authorization (see the sketch after this list).
* Auditing records information about database activity: who, what, when, and possibly where.
* Encryption protects data at the lowest possible level by storing and possibly transmitting data in an unreadable form. The DBMS encrypts data when it is added to the database and decrypts it when returning query results. This process can occur on the client side of a network connection to prevent unauthorized access at the point of use.
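
A minimal sketch of authorization via SQL's GRANT and REVOKE statements, with hypothetical role and table names:

    -- The clerk role may read and insert rows, but not delete them
    GRANT SELECT, INSERT ON accounts TO clerk;
    REVOKE DELETE ON accounts FROM clerk;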

Confidentiality

Laws and regulations govern the release of information from some databases, protecting medical history, driving records, telephone logs, etc.

In the United Kingdom, database privacy regulation falls under the Office of the Information Commissioner. Organizations based in the United Kingdom and holding personal data in digital format such as databases must register with the Office.[12]
See also

* Comparison of relational database management systems
* Comparison of database tools
* Data hierarchy
* Database design
* Database theory
* Database-centric architecture
* Data structure
* Document-oriented database
* Government database
* In-memory database
* Real time database
* Web database

