📚 Understand More About RDBMS and NoSQL Through Analyzing PostgreSQL and MongoDB

Truong Phung - Oct 28 - Dev Community

1. Quick Overview:

  • Relational Databases (RDBMS):

    • Examples: MySQL, PostgreSQL, Oracle.
    • Uses structured tables with rows and columns.
    • Supports SQL for querying.
    • Best for structured data and complex queries.
  • NoSQL Databases:

    • Examples: MongoDB, Redis, Cassandra.
    • Schema-less and flexible.
    • Suitable for unstructured or semi-structured data.
    • Best for high-velocity, large-volume data, like real-time analytics.

2. Key Concepts:

  • ACID Properties: Atomicity, Consistency, Isolation, Durability.
    • Important for ensuring data integrity in transactions.
  • BASE Properties: Basically Available, Soft State, Eventual Consistency.
    • Favors scalability and high availability in distributed systems.
  • Normalization and Denormalization: Techniques for organizing the data model, trading off redundancy against retrieval performance.
  • Data Schema Migration: Modifying the structure of a database to accommodate changes in the data model.
  • Indexing: Techniques to optimize query performance.
  • CAP Theorem: Consistency, Availability, Partition Tolerance.
    • Explains trade-offs in distributed systems.
  • Sharding and Replication: Methods for scaling databases horizontally and ensuring availability.

Now we will dive deeper into these key concepts, using PostgreSQL (as the RDBMS representative) and MongoDB (as the NoSQL representative).

3. ACID vs BASE

  • ACID (for RDBMS): ACID properties ensure reliable processing of database transactions, crucial for maintaining data integrity in relational databases:
    • Atomicity: A transaction is all or nothing: either it fully completes, or it rolls back entirely. This ensures that partial operations don't corrupt the database.
    • Consistency: A transaction moves the database from one valid state to another, maintaining predefined rules like constraints and triggers.
    • Isolation: Ensures that transactions occur independently without interfering with each other, preventing issues like dirty reads.
    • Durability: Once a transaction is committed, it remains persistent, even in case of system failures or crashes.
  • BASE (for NoSQL): BASE principles focus on availability and scalability over strict consistency, making them more suitable for distributed systems:

    • Basically Available: The system guarantees availability, ensuring that database operations are almost always possible, even during network partitioning.
    • Soft State: The state of the system might change over time without input due to eventual consistency. This allows the system to reach a consistent state eventually.
    • Eventual Consistency: Guarantees that, given enough time, all updates will propagate through the system, and all replicas will be consistent, but not immediately.
  • While ACID focuses on strong consistency, BASE trades some of that for better scalability and availability, which is often needed in distributed, high-traffic applications.

  • ACID in PostgreSQL: PostgreSQL adheres strictly to ACID properties, ensuring reliable transactions (a minimal transaction sketch follows this list):

    • Atomicity: Transactions in PostgreSQL either complete fully or rollback if any error occurs, ensuring no partial updates.
    • Consistency: It enforces constraints (e.g., foreign keys, unique keys) and rules that maintain data integrity, ensuring each transaction leaves the database in a valid state.
    • Isolation: PostgreSQL supports various isolation levels (e.g., READ COMMITTED, SERIALIZABLE), allowing control over how transactions interact and preventing issues like dirty reads.
    • Durability: Uses Write-Ahead Logging (WAL) to ensure that committed transactions are written to disk, making data persistent even after a crash.
  • BASE in MongoDB: MongoDB follows BASE principles, offering flexibility and high availability:

    • Basically Available: MongoDB prioritizes availability over strict consistency, allowing queries to be processed even during system partitions.
    • Soft State: MongoDB can allow temporary inconsistencies since it doesn't require immediate consistency across all nodes.
    • Eventual Consistency: Changes in MongoDB will eventually propagate across all replicas, ensuring all nodes will sync up given enough time, making it suitable for distributed environments where immediate consistency is less critical.
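
To make the ACID side concrete, here is a minimal transaction sketch in PostgreSQL. The accounts table, its columns, and the amounts are assumptions for illustration, not part of the original example:

      -- Transfer 100 between two accounts atomically: either both updates
      -- are applied, or neither is (Atomicity), and the committed result
      -- survives a crash (Durability, via the WAL).
      BEGIN;
      UPDATE accounts SET balance = balance - 100 WHERE id = 1;
      UPDATE accounts SET balance = balance + 100 WHERE id = 2;
      -- On any error before this point, run ROLLBACK; to discard both updates.
      COMMIT;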

4. Normalization vs Denormalization

1. Normalization in PostgreSQL:

  • Purpose: Normalization is a design process used to organize a relational database to reduce redundancy and improve data integrity.
    • Process: Data is divided into multiple tables, and relationships are established using keys (like foreign keys).
    • Benefits: Reduces data redundancy, ensures consistent data updates, and minimizes storage space. For example, storing customer information in one table and order information in another avoids duplicate customer data.
    • Trade-offs: It can lead to complex queries with multiple JOIN operations, which may impact performance, especially in read-heavy scenarios.

Example Scenario: A library database that stores information about books and authors.

  • Table 1: Authors with columns AuthorID and AuthorName.
  • Table 2: Books with columns BookID, Title, and AuthorID.
  • Each book is linked to its author using the AuthorID foreign key.
  • Benefits: If an author changes their name, it's updated in the Authors table without needing to change every book record. This reduces redundancy.
  • Example Query: To get the author name of a book:

      SELECT Books.Title, Authors.AuthorName
      FROM Books
      JOIN Authors ON Books.AuthorID = Authors.AuthorID;
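
A minimal schema sketch for this library scenario might look as follows; the table and column names follow the example above, while the column types and constraints are assumptions:

      CREATE TABLE Authors (
          AuthorID   SERIAL PRIMARY KEY,
          AuthorName TEXT NOT NULL
      );

      CREATE TABLE Books (
          BookID   SERIAL PRIMARY KEY,
          Title    TEXT NOT NULL,
          AuthorID INTEGER NOT NULL REFERENCES Authors (AuthorID)
      );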
    

2. Denormalization in MongoDB:

  • Purpose: Denormalization is used to store related data within a single document, optimizing read performance by reducing the need for multiple queries.
  • Process: Related data is embedded within a single collection rather than being spread across multiple collections. For example, an order might include customer information directly within each order document.
  • Benefits: Enhances read performance by reducing the need for JOIN operations, making it ideal for applications that prioritize fast reads.
  • Trade-offs: Increases data redundancy, which can make updates more challenging, as changes to duplicated data must be reflected across multiple documents.

Example Scenario: A social media platform storing posts and comments.

  • A Posts collection where each document includes the post content along with an embedded array of comments.

      {
          "_id": "1",
          "title": "First Post",
          "content": "Hello world!",
          "comments": [
            { "user": "user1", "comment": "Great post!" },
            { "user": "user2", "comment": "Thanks for sharing!" }
          ]
       }
    

    Benefits: All the data needed to display a post along with its comments can be retrieved with a single query. This improves read performance.

     db.posts.find({ _id: "1" });
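
    Trade-off: the embedded data is duplicated, so a change to shared information must be applied in every document that embeds it. A sketch of such an update, assuming the Posts structure above and a hypothetical rename of user1:

     db.posts.updateMany(
       { "comments.user": "user1" },
       { $set: { "comments.$[c].user": "user1_renamed" } },
       { arrayFilters: [ { "c.user": "user1" } ] }
     );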
    

In normalization, data is split to ensure minimal redundancy, while in denormalization, data is stored together to make access faster at the expense of redundancy.

5. Data Schema Migration:

PostgreSQL

In PostgreSQL, schema migrations are more structured due to its relational nature. When making changes such as adding or removing tables, altering column types, or adding constraints, you typically use SQL commands like ALTER TABLE. Tools like Liquibase or Flyway help automate and version these changes, ensuring consistency across environments and avoiding data loss.
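
For example, a typical migration might add a column, backfill it, and then tighten constraints. This is only a sketch; the users table, the phone_number column, and the placeholder value are assumptions:

      ALTER TABLE users ADD COLUMN phone_number TEXT;
      UPDATE users SET phone_number = 'unknown' WHERE phone_number IS NULL;
      ALTER TABLE users ALTER COLUMN phone_number SET NOT NULL;

Tools like Flyway or Liquibase would version such a script so that every environment applies it exactly once.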

MongoDB

MongoDB is schema-less, meaning it does not enforce a fixed schema on its collections. However, as applications evolve, document structures may need adjustments (e.g., adding fields or changing data formats). Even though migrations aren't required as in SQL databases, they are still needed to standardize new data structures or backfill existing records. This can be done using scripts, the MongoDB shell, or migration tools like Mongoose for Node.js.
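
A backfill in MongoDB is typically just a script run against the collection. A minimal sketch, assuming a hypothetical users collection gaining a phoneNumber field:

      // Give existing documents a default value for the new field.
      db.users.updateMany(
        { phoneNumber: { $exists: false } },
        { $set: { phoneNumber: null } }
      );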

6. Indexing

1. Indexing in PostgreSQL:

  • Structure: PostgreSQL, like other RDBMSs, uses B-tree indexes by default, which organize data in a balanced tree structure. This allows for efficient data retrieval, especially with range queries.
  • Types: PostgreSQL supports various types of indexes like B-tree, Hash, GiST (Generalized Search Tree), and GIN (Generalized Inverted Index).
  • Usage: Indexes can be created on one or multiple columns, and they help speed up SELECT queries, but they add overhead for INSERT, UPDATE, and DELETE operations because the index must be updated as well.
  • Example:

     CREATE INDEX idx_users_email ON users (email);
    

2. Indexing in MongoDB:

  • Structure: MongoDB also uses B-tree-like data structures for its indexes. However, because MongoDB is a document-based NoSQL database, indexes are applied directly to fields within documents, which can be nested or array-based.
  • Types: MongoDB supports a variety of indexes, including single-field, compound (multiple fields), multi-key (for arrays), geospatial, and text indexes.
  • Usage: Indexes in MongoDB can improve query performance by reducing the number of documents scanned, but like RDBMS, they also come with a write-performance cost due to index updates during insertions or updates.
  • Example:

     db.users.createIndex({ email: 1 });
    
  • Use Case: Indexes are used to optimize query performance, especially for fields that are frequently filtered or sorted. In MongoDB, indexing fields in sub-documents or arrays is common due to the flexible schema.
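
As a sketch of that flexibility, the following creates a compound index that includes a nested field, plus an index on an array field; the field names (address.city, lastLogin, tags) are assumptions for illustration:

     // Compound index spanning a nested field and a top-level field.
     db.users.createIndex({ "address.city": 1, lastLogin: -1 });

     // Indexing an array field automatically produces a multikey index.
     db.users.createIndex({ tags: 1 });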

Key Differences:

  • Schema Flexibility: PostgreSQL uses a fixed schema, so indexing is done on specific columns. MongoDB's schema flexibility allows for indexing within nested fields, arrays, or sub-documents.
  • Indexing Strategy: In PostgreSQL, strategies focus on relational data, while MongoDB's approach accommodates hierarchical and document-based data structures.
  • Query Optimization: Both databases use indexes to optimize queries, but MongoDB's query optimizer adapts to the flexible data model, making it more versatile for dynamic document structures.

Some Downsides of Indexing

While indexing is very important for optimizing query performance, it comes with the following downsides, so be mindful when using it.

  • Increased Storage Space: Indexes consume additional storage space on disk, which can become significant, especially for large datasets with multiple indexes.
  • Slower Write Performance: Every INSERT, UPDATE, or DELETE operation needs to update the corresponding indexes. This can slow down write operations as the database has to maintain consistency across all indexed fields.
  • Memory Usage: Indexes need to be kept in memory (cache) for optimal performance, which increases the memory usage. Large indexes may exceed available memory, leading to slower query performance.
  • Maintenance Overhead: Indexes require regular maintenance, such as reindexing or rebuilding, to ensure they remain efficient over time. This can add operational complexity, especially for large-scale databases.
  • Suboptimal Index Choices: Incorrectly designed indexes can lead to worse performance than no indexing at all, such as when an index is rarely used or when it causes the query optimizer to choose inefficient query plans.
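
One practical habit that addresses the last point is checking whether the planner actually uses an index before keeping it. A minimal PostgreSQL sketch, assuming the users/email index created earlier:

      EXPLAIN ANALYZE
      SELECT * FROM users WHERE email = 'alice@example.com';

MongoDB offers a similar view through a query's explain("executionStats") output.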

7. CAP Theorem

The CAP Theorem (Consistency, Availability, and Partition Tolerance) describes the trade-offs that must be made when designing a distributed database system:

  1. Consistency: Every read receives the most recent write or an error. It ensures that all nodes in a distributed system see the same data at the same time.
  2. Availability: Every request (read or write) gets a response, even if it might not be the most recent version of the data. The system is operational despite some nodes being down.
  3. Partition Tolerance: The system continues to operate despite arbitrary message loss or failure of part of the system (network partitions). It means the system can handle communication breakdowns between nodes.

The theorem states that a distributed database can achieve at most two out of the three at any given time:

  • CP (Consistency + Partition Tolerance): Prioritizes data consistency, sacrificing availability during network issues. Example: traditional SQL databases like PostgreSQL in a clustered setup.
  • AP (Availability + Partition Tolerance): Prioritizes availability over consistency, allowing for temporary inconsistencies. Example: NoSQL databases like MongoDB.
  • CA (Consistency + Availability): Theoretically possible only when there is no network partition. Since network partitions are inevitable in distributed systems, CA is often unattainable in practice.

CP in PostgreSQL

In the context of the CAP theorem, PostgreSQL is considered CP (Consistency and Partition Tolerance) due to its focus on ensuring data consistency and its ability to handle network partitions, albeit with trade-offs in availability.

  1. Consistency
    • Strict consistency: PostgreSQL adheres to the ACID principles, ensuring that all transactions are applied in a consistent order. This means that once a transaction is committed, all subsequent reads will reflect the committed state.
    • Isolation levels: PostgreSQL provides various isolation levels (like Serializable, Repeatable Read, Read Committed) to maintain consistent data reads and writes, allowing it to manage concurrent transactions without sacrificing data accuracy.
  2. Partition Tolerance
    • Handling network partitions: PostgreSQL can handle scenarios where there are network issues between nodes in a distributed setup (e.g., using replication). However, during a partition, PostgreSQL might prioritize data consistency, which means that some parts of the database may become read-only or might refuse writes until the network is restored to maintain consistency.
    • Failover mechanisms: In a cluster setup with primary and replica nodes, PostgreSQL ensures that data is consistently replicated across nodes. If a partition occurs and a node is isolated, the system might restrict updates to avoid conflicts, ensuring that data remains in a consistent state when the network is restored.

Overall, PostgreSQL's focus on data accuracy and integrity makes it suitable for systems where ensuring correct data is more critical than maintaining constant availability, aligning it with the CP aspect of the CAP theorem.
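
As a rough illustration of that consistency-first stance, a primary can be configured to wait for standby confirmation before acknowledging commits. This is only a sketch; the standby names are assumptions, and the settings assume streaming replication is already in place:

      ALTER SYSTEM SET synchronous_standby_names = 'ANY 1 (standby1, standby2)';
      ALTER SYSTEM SET synchronous_commit = 'remote_apply';
      SELECT pg_reload_conf();

With these settings, writes stall rather than diverge when no standby can confirm them, which is the CP trade-off described above.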

AP in MongoDB

In the context of the CAP theorem, MongoDB is considered an AP (Availability and Partition Tolerance) system, focusing on providing high availability even in the presence of network partitions.

  1. Availability

    • High Availability: MongoDB is designed to be always available. In scenarios where a partition occurs, MongoDB prioritizes serving requests to ensure that the system remains operational. This is particularly important for applications that require continuous access to data, such as web applications and services.
    • Replica Sets: MongoDB uses replica sets, which are groups of MongoDB servers that maintain the same dataset. If the primary node goes down, one of the secondary nodes can be automatically promoted to primary, ensuring that the database remains available for read and write operations.
  2. Partition Tolerance

    • Handling Partitions: In the event of a network partition, MongoDB will still allow writes to the available nodes. This can lead to situations where different nodes may have different versions of the data, a phenomenon known as "eventual consistency." When the network is restored, MongoDB uses mechanisms like conflict resolution to synchronize data across nodes.
    • Flexible Schema: MongoDB's schema-less design allows it to handle data in a way that is more tolerant to changes and can adapt to various data types and structures, which is advantageous during partitions.

In summary, MongoDB embraces the AP characteristics of the CAP theorem by ensuring that it remains available and operational during network partitions, while potentially sacrificing immediate consistency. This approach is suitable for applications where uptime is critical, and slight delays in data synchronization are acceptable.
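
MongoDB exposes this trade-off per operation through write concerns and read preferences. A minimal sketch, with the collection name and document chosen as assumptions:

      // Wait for a majority of replica-set members to acknowledge the write.
      db.orders.insertOne(
        { orderId: 1, status: "new" },
        { writeConcern: { w: "majority", wtimeout: 5000 } }
      );

      // Accept possibly stale reads from secondaries in exchange for availability.
      db.getMongo().setReadPref("secondaryPreferred");
      db.orders.find({ status: "new" });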

8. Sharding and Replication:

PostgreSQL

In PostgreSQL, sharding and replication are two key techniques used to enhance database performance and availability.

  1. Sharding
    • Definition: Sharding is the process of splitting a large database into smaller, more manageable pieces called shards. Each shard contains a subset of the data and is hosted on a separate database server.
    • Purpose: The primary goal of sharding is to improve performance by distributing the load across multiple servers, allowing for parallel processing of queries and better handling of large datasets.
    • Implementation: While PostgreSQL does not provide built-in sharding capabilities, developers can implement sharding using partitioning strategies and middleware solutions like Citus or PgPool-II. Citus extends PostgreSQL by allowing you to distribute your data across multiple nodes, enabling horizontal scaling.
  2. Replication
    • Definition: Replication is the process of copying data from one PostgreSQL server (the primary) to one or more other servers (replicas or standbys). This ensures that multiple copies of the data exist across different servers.
    • Types: PostgreSQL supports several replication methods, including:
      • Streaming Replication: Continuously ships changes from the primary server to the replicas in near real-time. It is primarily used for high availability and load balancing.
      • Logical Replication: Allows selective replication of specific tables or rows, providing more flexibility in what data is replicated and how it is consumed by other applications.
    • Purpose: The main benefits of replication are increased availability, disaster recovery, and load balancing for read operations.

Both sharding and replication in PostgreSQL are essential for building scalable and resilient applications. While sharding focuses on distributing data across multiple servers for performance, replication ensures data availability and fault tolerance.
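
As a small illustration of logical replication, a publication on the primary can be paired with a subscription on another server. This is a sketch only; the table, database, and connection details are assumptions:

      -- On the publisher (primary):
      CREATE PUBLICATION orders_pub FOR TABLE orders;

      -- On the subscriber (replica):
      CREATE SUBSCRIPTION orders_sub
          CONNECTION 'host=primary.example.com dbname=shop user=replicator'
          PUBLICATION orders_pub;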

MongoDB

In MongoDB, sharding and replication are essential features that enhance scalability and reliability. Here's a brief overview of each:

  1. Sharding

    • Definition: Sharding is the process of distributing data across multiple servers, known as shards. Each shard contains a portion of the dataset, allowing MongoDB to handle large volumes of data and traffic efficiently.
    • Purpose: The primary goal is to provide horizontal scalability by enabling the database to handle more data and higher throughput. Sharding helps balance the load across multiple nodes, which can improve performance for read and write operations.
    • Implementation: MongoDB automatically manages data distribution and balancing across shards. When a collection is sharded, MongoDB uses a shard key to determine how to distribute documents among the shards. This key is crucial, as it affects data access patterns and performance.
  2. Replication

    • Definition: Replication in MongoDB involves maintaining multiple copies of data across different servers (replica sets) to ensure data availability and fault tolerance.
    • Types: MongoDB supports:
      • Replica Sets: A group of MongoDB servers that maintain the same data set. One server acts as the primary (handling write operations), while others are secondaries (replicating data from the primary).
      • Automatic Failover: If the primary server goes down, one of the secondaries can be automatically elected as the new primary, ensuring continuous availability.
    • Purpose: The main benefits of replication include improved data redundancy, high availability, and read scalability since read operations can be distributed across secondary nodes.

Together, sharding and replication in MongoDB provide a robust framework for building scalable and resilient applications. Sharding enables horizontal scalability for large datasets, while replication ensures high availability and disaster recovery.
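
A minimal mongosh sketch of both features, with the database name, collection, and shard key chosen purely for illustration:

      // Run on the first member to start a replica set.
      rs.initiate();

      // Run against a mongos router: shard a collection on a hashed key.
      sh.enableSharding("shop");
      sh.shardCollection("shop.orders", { customerId: "hashed" });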

9. Database Security & Compliance

Importance of Securing Data

  • 1. Encryption: Protecting sensitive data at rest (stored data) and in transit (data being transmitted) is crucial for preventing unauthorized access. Encryption transforms data into a coded format, making it unreadable without the proper key or password. This is particularly important for compliance with regulations and protecting customer information.

  • 2. Access Control: Implementing strict access controls ensures that only authorized users can access or manipulate data. This can include role-based access control (RBAC), where permissions are granted based on a user's role within the organization, helping to minimize the risk of data exposure (see the sketch after this list).

  • 3. Regular Backups: Regular backups are essential to protect against data loss due to hardware failures, cyberattacks, or accidental deletions. Having a reliable backup strategy ensures that data can be restored quickly and accurately, reducing downtime and data loss.
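
A brief sketch of role-based access control in PostgreSQL, as mentioned in the access-control item above; the role, schema, and user names are assumptions:

      -- A read-only group role and a login user that inherits it.
      CREATE ROLE reporting_readonly NOLOGIN;
      GRANT USAGE ON SCHEMA public TO reporting_readonly;
      GRANT SELECT ON ALL TABLES IN SCHEMA public TO reporting_readonly;

      CREATE ROLE analyst LOGIN PASSWORD 'change-me';
      GRANT reporting_readonly TO analyst;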

Common Threats

  • 1. SQL Injection: This is one of the most prevalent web application security vulnerabilities. Attackers can exploit SQL injection by injecting malicious SQL code into queries, allowing them to manipulate or extract data from the database. To mitigate this risk, developers should use prepared statements and parameterized queries (see the sketch after this list).

  • 2. Data Breaches: These occur when unauthorized individuals gain access to sensitive data. Breaches can result from various factors, including weak passwords, outdated software, or social engineering tactics. Organizations must monitor their systems for vulnerabilities and implement strong security protocols to reduce the risk.
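
The parameterized approach mentioned above keeps user input as data rather than executable SQL. A minimal server-side sketch in PostgreSQL (the table, columns, and values are assumptions; application code would normally use its driver's bind parameters instead):

      PREPARE find_user (text) AS
          SELECT id, email FROM users WHERE email = $1;

      EXECUTE find_user('alice@example.com');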

Role of Compliance

  • 1. Regulations like GDPR and HIPAA: Compliance with regulations such as the General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA) is critical for organizations that handle personal or health-related data. These regulations require businesses to implement strict data protection measures, including data encryption, user consent for data processing, and breach notification protocols.

  • 2. Impact on Database Management: Compliance impacts how databases are managed and secured. Organizations must adopt practices that ensure data privacy, accountability, and transparency, leading to increased operational overhead but ultimately fostering trust among customers and stakeholders.

Database security is a multifaceted domain that requires organizations to prioritize encryption, access control, and regular backups to protect against common threats like SQL injection and data breaches. Compliance with regulations like GDPR and HIPAA plays a crucial role in shaping data management practices and ensuring data privacy.

If you found this helpful, let me know by leaving a 👍 or a comment! Or if you think this post could help someone, feel free to share it. Thank you very much! 😃
