In-Memory OLTP SQL Server Secrets: Do You Really Need It?

When SQL Server 2014 was launched in April 2014, it brought with it a much-talked-about bonus. Alongside the standard relational engine and its traditional disk-based tables, SQL Server 2014 includes In-Memory OLTP, a high performance, memory-optimized engine that allows users to store data in memory-optimized tables that reside permanently in memory.

This is not just another “DBCC PINTABLE”: memory-optimized tables and indexes are completely new data structures, and very different from their disk-based counterparts. SQL Server takes no locks or latches on these structures during reads or writes, so they have the potential to improve online transaction processing performance significantly. Alongside tables and indexes that need no latches or locks during concurrent access, the new engine also provides natively compiled stored procedures.

Intro to In-Memory OLTP in SQL Server

In-Memory OLTP is integrated with SQL Server to provide a seamless experience in all areas such as development, deployment, manageability, and supportability. A database can contain in-memory as well as disk-based objects.

Rows in memory-optimized tables are versioned. This means that each row in the table potentially has multiple versions. All row versions are maintained in the same table data structure. Row versioning is used to allow concurrent reads and writes on the same row. 

Transact-SQL code can be migrated to natively-compiled stored procedures, which are essentially machine code that can be executed immediately by the CPU, without the need for any further compilation or interpretation. While only a limited subset of T-SQL is supported in these procedures at the moment, they generate far fewer CPU instructions for a given task than the equivalent interpreted T-SQL stored procedure.

All of which gives organizations the opportunity to create high speed and scalable enterprise and web-scale applications and databases.

So what’s the promise?

In-Memory OLTP SQL Server is based on four pillars, each of which contributes an important element to the whole.

  • With tables in memory rather than on disk, the time to access those tables is considerably reduced, leading to higher performance.
  • Because T-SQL is compiled to machine code, natively compiled stored procedures use fewer instructions, so business logic processing is considerably faster.
  • Optimistic multi-version concurrency control removes the need for the latches and locks that are necessary in disk-based tables to ensure transactional integrity, but can cause contention and blocking for highly concurrent workloads. Instead, when rows are modified, new versions are created that use time-stamps for validation to ensure full ACID support.
  • In-Memory OLTP is fully integrated with SQL Server and is managed with a similar set of tools.

The end result of these four pillars is that memory-optimized tables, used with In-Memory OLTP, can be a lot faster than disk-based tables, and yet the two can happily co-exist in the same database and a client application can access data in both without needing to be aware of this fact.
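
As a rough illustration of that coexistence (the table and column names here are purely hypothetical), a single interpreted T-SQL query can join a memory-optimized table to an ordinary disk-based table with no special syntax:

    -- Hypothetical interop query: dbo.OrdersInMemory is assumed to be a
    -- memory-optimized table, dbo.Customers a traditional disk-based table.
    SELECT o.OrderId, o.OrderDate, c.CustomerName
    FROM dbo.OrdersInMemory AS o
    JOIN dbo.Customers AS c
        ON c.CustomerId = o.CustomerId
    WHERE o.OrderDate >= '20240101';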

An added benefit of in-memory tables is the way SQL Server handles latches and locks. According to Microsoft, the engine uses an optimistic approach, meaning it does not place locks or latches on any version of updated rows of data, which is very different from how it treats normal tables. It’s this mechanism that reduces contention and allows transactions to process far faster. Instead of locks, In-Memory OLTP uses row versions, keeping the original row until after the transaction is committed. Much like Read Committed Snapshot Isolation (RCSI), this allows other transactions to read the original row while the new row version is being written. The in-memory structure is pageless and optimized for speed in active memory, which can have a significant performance impact depending on the workload.

SQL Server also changes its logging for these tables. Because both on-disk and in-memory versions (row versions) of the data exist, less needs to be written to the log: SQL Server can use the before and after row versions to derive information it would normally have to read from the log file. In SQL Server 2019, the same concept underpins the new Accelerated Database Recovery (ADR) approach to logging and recovery.

Where are the pitfalls?

These potential increases in performance do, however, come at a cost. Quite simply, the challenge of making in-memory tables work has meant that some features of disk-based tables just aren’t available.

Functions like replication, mirroring, and database snapshots are not supported, for example, and in SQL Server 2014 the amount of data in durable in-memory tables is limited to a recommended 256 GB per database.

Similarly, in SQL Server 2014, In-Memory OLTP does not recognize statements such as ALTER TABLE, CREATE INDEX, DROP INDEX and ALTER INDEX for memory-optimized tables. This means that we cannot simply alter existing disk-based tables to become memory-optimized tables, and once we’ve created a memory-optimized table, we can’t subsequently alter its structure or add more indexes.

FOREIGN KEY and CHECK constraints have also been shelved in order to improve performance, as have many classic T-SQL operators and constructs in natively compiled procedures, such as OR, LIKE, BETWEEN, and OUTER JOIN. This poses probably the biggest challenge because it restricts what developers can do in a stored procedure, seriously limiting business logic.

Finally, disk-based tables cannot be accessed from natively compiled stored procedures, which reduces the potential performance gains when disk-based and in-memory tables need to be used together.

Features not supported for in-memory tables to keep in mind:

  • Replication
  • Mirroring
  • Linked Servers
  • Bulk Logging
  • DDL Triggers
  • Minimal Logging
  • Change Data Capture
  • Data Compression
  • Foreign Keys (Can only reference other Memory Optimized Table PKs)
  • ALTER TABLE
  • CREATE INDEX
  • TRUNCATE TABLE
  • DBCC CHECKTABLE
  • DBCC CHECKDB

Scenarios Not Supported with In-Memory OLTP SQL Server

  • Accessing memory-optimized tables by using the context connection from inside CLR stored procedures.
  • Keyset and dynamic cursors on queries accessing memory-optimized tables. These cursors are degraded to static and read-only.
  • Using MERGE INTO target, where target is a memory-optimized table, is unsupported.
    • MERGE USING source is supported for memory-optimized tables.
  • The ROWVERSION (TIMESTAMP) data type is not supported.
  • Auto-close is not supported with databases that have a MEMORY_OPTIMIZED_DATA filegroup.
  • Transactional DDL, such as CREATE/ALTER/DROP of In-Memory OLTP objects, is not supported inside user transactions.
  • Event notification.
  • Policy-based management (PBM).
    • Prevent and log only modes of PBM are not supported. Existence of such policies on the server may prevent In-Memory OLTP DDL from executing successfully. On demand and on schedule modes are supported.
  • Database containment (Contained Databases) is not supported with In-Memory OLTP.
    • Contained database authentication is supported. However, all In-Memory OLTP objects are marked as breaking containment in the dynamic management view (DMV) dm_db_uncontained_entities.

What are the prerequisites to start using memory-optimized tables?

In order to support Memory-Optimized Tables (MOT), there are some requirements and steps to follow.

Starting with the basics, we’ll need a 64-bit Developer or Enterprise edition of SQL Server. The In-Memory OLTP feature was introduced in SQL Server 2014, so we must install SQL Server 2014 or later. If we are planning to use In-Memory OLTP on Azure SQL Database, we need to make sure that we’re running at least V12.

The IMOLTP feature is not a separate component. As soon as we have the SQL Server Engine installed, we can start using the feature.
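
A quick way to confirm that an instance can actually use the feature is to query its server properties; SERVERPROPERTY('IsXTPSupported') returns 1 when In-Memory OLTP is available:

    -- Check version, edition and In-Memory OLTP support for the current instance.
    SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
           SERVERPROPERTY('Edition')        AS Edition,
           SERVERPROPERTY('IsXTPSupported') AS IsXTPSupported;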

It is important to have enough available memory as well. But what do we mean by “enough”? In order to provide the right amount of memory, we need to estimate the size of the data that will be stored in memory. Unfortunately, this is not the only factor to take into account when sizing the server memory. Because of the memory needs of the row-versioning strategy used for Memory-Optimized Tables, and in order to support table changes, Microsoft recommends that at least double the estimated table size is available.

The memory provisioning is not that straightforward: remember that if we have other databases or tables in the same server, whether or not they are using the In-memory OLTP feature, we will need to provide memory for those components as well. If the system is sharing its resources, we will need to take that into account.

Recommended Memory = ( Estimated MOT Data Size x 2 ) + Estimated Memory for other components
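
As a purely illustrative example, with an estimated 20 GB of memory-optimized data and roughly 64 GB needed for the buffer pool and other components, the recommendation works out to (20 GB × 2) + 64 GB = 104 GB.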

It is a very good idea to enable the Resource Governor in order to control the memory consumption of the MOTs. This is not mandatory, but it can help avoid server-wide “low memory” problems.
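
A minimal sketch of that setup, assuming a database named InMemDemo and a pool name of our choosing, creates a resource pool with a memory cap and binds the database to it (the binding only takes effect once the database is taken offline and brought back online):

    -- Cap the memory available to memory-optimized objects in InMemDemo.
    CREATE RESOURCE POOL Pool_InMemDemo WITH (MAX_MEMORY_PERCENT = 50);
    ALTER RESOURCE GOVERNOR RECONFIGURE;

    -- Bind the database to the pool (database name, then pool name).
    EXEC sys.sp_xtp_bind_db_resource_pool N'InMemDemo', N'Pool_InMemDemo';

    -- The binding takes effect the next time the database comes online.
    ALTER DATABASE InMemDemo SET OFFLINE WITH ROLLBACK IMMEDIATE;
    ALTER DATABASE InMemDemo SET ONLINE;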

Another requirement is related to disk space. The disk is still used: the recommendation is to have at least double the estimated MOT size available as free disk space.
Still on the subject of disks, the old and well-tried recommendations remain valid: it is best to have a dedicated disk for this purpose, enable Instant File Initialization (IFI), and remember that the faster the disk, the better the performance.

Memory-optimized Table – two different approaches

There are two different types of MOTs, with different requirements and characteristics: Durable and Non-Durable tables.

Both Durable and Non-Durable MOTs operate fully in memory. This means that all the table rows, whether active or not, are in memory all the time while the SQL Server Engine process is running, and all read and write operations are performed directly against the server’s memory.

The main difference between them lies in the data lifecycle and the way rows are managed, which leads to different performance characteristics and trade-offs.

RAM is a very efficient place to store data, but it is volatile: if we lose power and the server shuts down, all the data stored in memory is lost. That is why there are two different types of MOT.

By default, if a MOT is created without explicitly specifying its durability, a Durable MOT is created. This kind of table behaves much like a traditional disk-based table, keeping its data even if the server, or the instance, is restarted.
But how does it work?

In order to ensure that the data stored in memory will be available after a SQL Server instance restart, or if an unfortunate crash happens, all the operations targeting Durable MOTs are logged and stored on disk. As we can imagine, both logging and saving the data to disk are expensive operations, so to minimize the performance impact a delayed durability strategy can be used: the user executes the command and receives a “success” message immediately, while the data is saved to disk afterwards. This strategy has a performance benefit, but it won’t guarantee that our data is safe if the server crashes in the meantime. In summary, the default MOT type is Durable, which keeps the data even if the server is restarted, but at some cost to performance.
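
A sketch of that delayed durability option, using hypothetical database and table names, looks like this: the option is allowed at database level and then requested per transaction, so the commit returns before the log records are hardened to disk:

    -- Allow delayed durability in the database.
    ALTER DATABASE InMemDemo SET DELAYED_DURABILITY = ALLOWED;

    -- Request it for an individual transaction; the SNAPSHOT hint is needed
    -- when touching a memory-optimized table inside an explicit transaction.
    BEGIN TRANSACTION;
        UPDATE dbo.OrdersInMemory WITH (SNAPSHOT)
        SET Status = N'Shipped'
        WHERE OrderId = 42;
    COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);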

On SQL Server 2014, there’s a “soft” limitation of 256 GB per database for Durable MOTs. This means that the sum of the data contained on all the Durable MOTs cannot exceed this amount. From SQL Server 2016, this recommendation was increased to 2 TB per database. This is not a hard limit, just a recommendation given by Microsoft, based on their tests.

The Other Type

The other type of Memory-Optimized Table is “Non-Durable”. As its name suggests, this is a volatile table. Because of this characteristic, there’s no need either to log its operations or to save its data to disk, which eliminates disk access for this table entirely. This kind of table offers the greatest performance benefit, with the penalty of data loss if the server crashes or if the instance/server is restarted.

Both Durable and Non-Durable MOTs are supported on databases with traditional disk-based tables. It is possible to create both Durable and Non-Durable MOTs in the same database.
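
A minimal sketch of both kinds, with purely illustrative table names, differs only in the DURABILITY option:

    -- Durable (the default): schema and data survive a restart.
    CREATE TABLE dbo.OrdersInMemory
    (
        OrderId    INT IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
        CustomerId INT          NOT NULL,
        OrderDate  DATETIME2    NOT NULL,
        Status     NVARCHAR(20) NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    -- Non-durable: only the schema survives a restart; the rows are lost.
    CREATE TABLE dbo.SessionCache
    (
        SessionId UNIQUEIDENTIFIER NOT NULL PRIMARY KEY NONCLUSTERED,
        Payload   NVARCHAR(2000)   NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);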

It is important to mention that, in the first version of In-Memory OLTP in SQL Server 2014, a MOT could not be changed after its creation, meaning that simple actions such as creating new columns, changing data types or tuning indexes were not possible in that version. From SQL Server 2016 the ALTER TABLE statement is supported, bringing much more flexibility to databases with MOTs. Be aware that while the change is being processed, SQL Server will require double the amount of memory currently used by the table in order to succeed. SQL Server 2016 also introduced the possibility of using Foreign Keys and nullable columns.
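
From SQL Server 2016 onward, a sketch of such an in-place change (again with illustrative names) could add a column and an index directly:

    -- Add a column and a new index to an existing memory-optimized table.
    -- While this runs, SQL Server may need roughly double the table's memory.
    ALTER TABLE dbo.OrdersInMemory
        ADD ShippedDate DATETIME2 NULL;

    ALTER TABLE dbo.OrdersInMemory
        ADD INDEX ix_OrdersInMemory_CustomerId NONCLUSTERED (CustomerId);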

Memory-optimized Table Variable – What is that?

Taking advantage of the Memory-Optimized Table Type (MOT Type), another structure that is compiled and optimized to work in memory, it is possible to create a Memory-Optimized Table Variable (MOT Variable).

A MOT Variable is similar to a traditional table variable; the main difference is that this kind of object uses only memory, never disk. This way, there’s no TempDB access or any other sort of disk I/O.

Another advantage is that MOT Variables use the same structure as MOTs, which is highly optimized to work in memory. The memory allocated to a MOT Variable, including its invalid row versions, is freed as soon as the variable goes out of scope.
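
A sketch of a memory-optimized table type and a variable declared from it (names are illustrative) looks like this:

    -- A memory-optimized table type; at least one index is required.
    CREATE TYPE dbo.OrderIdList AS TABLE
    (
        OrderId INT NOT NULL,
        INDEX ix_OrderId NONCLUSTERED (OrderId)
    )
    WITH (MEMORY_OPTIMIZED = ON);
    GO

    -- A table variable of that type lives in memory, not in TempDB.
    DECLARE @Orders dbo.OrderIdList;
    INSERT INTO @Orders (OrderId) VALUES (1), (2), (3);
    SELECT OrderId FROM @Orders;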

Native Compilation

In In-Memory OLTP, “Native Compilation” is the process of converting T-SQL code into native machine code, which brings more efficient data access and query execution since – unlike with traditional SQL Server objects – the code is no longer interpreted during the execution phase.

Both Memory-Optimized Tables and Memory-Optimized Table Types are compiled, resulting in a DLL file. Those DLLs are recompiled on every database or instance restart, based on the metadata stored in the database.
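
A minimal natively compiled procedure, assuming the hypothetical dbo.OrdersInMemory table shown earlier, illustrates the required options – NATIVE_COMPILATION, SCHEMABINDING and an ATOMIC block:

    CREATE PROCEDURE dbo.usp_GetOrderStatus
        @OrderId INT
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
    AS
    BEGIN ATOMIC WITH
        (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

        -- Only memory-optimized tables can be referenced here.
        SELECT OrderId, OrderDate, Status
        FROM dbo.OrdersInMemory
        WHERE OrderId = @OrderId;
    END;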

Row structure and Indexes

Pages and extents are part of the structure of traditional disk-based tables, whose rows are identified by page number and offset. This is not true of Memory-Optimized Tables.

In order to provide better efficiency, the rows of MOTs are organized differently, as shown below:

[Figure: row structure of a memory-optimized table]

In order to allow concurrent reads and writes, rows from MOTs are versioned, resulting in the following structure for the Row Header:

[Figure: structure of the row header]

The Begin and End Timestamps are used to control “row visibility”. The Begin Timestamp contains the time of the creation of the row. Once the End Timestamp is set, the row is interpreted as “removed”. It is important to understand that a removed row will still be stored in memory for a while: once the Garbage Collector runs and there are no more active transactions that started before the End Timestamp, the row is cleared from memory. Notice that a removed row can be generated not only by DELETE statements, but also by UPDATE statements. Under the hood, an UPDATE performs an INSERT and a DELETE, because it is more efficient to create a new row with the modified data than to update the current one. Aborted INSERT transactions are also a possible cause of deleted rows.

The Statement ID is the unique ID value for the statement that created the row. The “IndexLink Count” is a number indicating how many indexes are referencing the row.

The remaining slots are used to store index pointers, which are linking the rows of a particular table. That is the reason why every memory-optimized table must have at least one index.

In MOTs, rows are stored in memory heaps and, unlike disk-based tables, the data of a single table is not held in close proximity. This means that the rows are likely to be spread in memory, without a logical order.

There are two distinct general areas: the Header and the Payload. As mentioned, the row structure for a MOT is different from disk-based tables in order to achieve the best performance by taking advantage of the table being memory-based.

The Payload area contains the table columns, but what about the row header? Let’s take a deeper look:

  • Begin and End Timestamp – In short, those 8-byte fields are going to store, respectively, the timestamp for the transaction that created the row and the transaction that deleted the row. But this is not that simple, there are some important details:
    • The timestamp is the point in time when the transaction finishes. For this reason, both the Begin and End Timestamps temporarily hold the “Global Transaction ID”, a unique incremental value that identifies the transaction. The “Global Transaction ID” is recycled when the instance is restarted.
    • When a row is created there is (obviously) no “End Timestamp” yet, so the infinity (∞) value is set.
    • Not only does a DELETE statement remove a row, but so does an UPDATE: when a row is changed, a DELETE followed by an INSERT is performed under the hood.
  • Statement ID – This is a 4-byte field that stores the unique ID of the statement that generated the row.
  • IndexLink Count – As its name suggests, this 2-byte field stores the number of Indexes referencing this row. Every table must have at least one index, which is required in order to link its rows.
  • Pointers – The remaining portion of the row header is filled with “Index Pointers”. The number of pointers is the same as the number of indexes referencing the row, which is tracked by the “IndexLink Count” field. In SQL Server 2014, a table could not be modified, and this was also true of its indexes; it wasn’t possible to change, remove or add indexes on a MOT. The reason behind this is that index pointers were part of the row structure. Happily, this limitation was removed in SQL Server 2016.

Memory-Optimized File Group

In order to create Memory-Optimized Objects, it is a requirement that a Memory-Optimized Filegroup (MOFG) has already been created in the database.

A MOFG is a kind of filegroup based on FILESTREAM, and its role is to hold containers in which the durable Memory-Optimized Tables will store their data. Even if you are planning to just use Non-Durable MOTs, the existence of a MOFG is still required.

A MOFG can have several containers in which Checkpoint Files will be stored. These files are used to keep track of the data for all durable tables.

Some facts about MOFG:

  • Even though they are based on FILESTREAM, there’s no need to enable this in order to create a MOFG.
  • A database can have only one MOFG.
  • A MOFG may hold multiple containers, each of which is a pointer to an OS folder.
    • From SQL Server 2016, each container has its own offline checkpoint thread, allowing better scalability when persisting the data.
    • It is a good practice, in terms of I/O, to create the containers on different disks.
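
Creating the filegroup and its containers might look like the sketch below (database name and folder paths are illustrative); note that the FILENAME of each container points to a folder, not a file:

    -- Add the memory-optimized filegroup to the database.
    ALTER DATABASE InMemDemo
        ADD FILEGROUP InMemDemo_MOFG CONTAINS MEMORY_OPTIMIZED_DATA;

    -- Add two containers, ideally on different disks.
    ALTER DATABASE InMemDemo
        ADD FILE (NAME = N'InMemDemo_Container1', FILENAME = N'D:\InMemData\Container1')
        TO FILEGROUP InMemDemo_MOFG;

    ALTER DATABASE InMemDemo
        ADD FILE (NAME = N'InMemDemo_Container2', FILENAME = N'E:\InMemData\Container2')
        TO FILEGROUP InMemDemo_MOFG;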

Let’s talk deployment

One of the biggest issues with In-Memory tables is deploying them initially, and then deploying subsequent changes later on. Both have become longer, more complicated tasks, even with a deployment tool like SQL Compare.

When deploying an in-memory table, for example, memory-optimized objects cannot be deployed inside a transaction. Instead, a two-step process is called for, in which they are deployed without transactions first, before the remaining objects are deployed with transactions.

Similarly, memory-optimized tables and natively compiled stored procedures cannot be deployed if DDL triggers and event notifications are being used to monitor certain schema operations, something which is fairly common in audited environments. Instead, the DDL triggers and event notifications need to be removed and recreated once the In-Memory OLTP objects are deployed, with a subsequent impact on auditing procedures.

If a change to the schema or an index is required, the same problem arises. That’s because, in SQL Server 2014, no changes can be made to a memory-optimized table once it’s created. Without support for statements like ALTER TABLE and ALTER INDEX, the table has to be rebuilt – again, using a two-step process.
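
A rough sketch of that rebuild, with entirely hypothetical names, is to create a new table with the changed definition, copy the rows across, drop the old table, and then repoint the application at the new name:

    -- New definition with the extra column we need.
    CREATE TABLE dbo.Quotes_New
    (
        QuoteId  INT            NOT NULL PRIMARY KEY NONCLUSTERED,
        Symbol   NVARCHAR(10)   NOT NULL,
        Price    DECIMAL(18,4)  NOT NULL,
        QuotedAt DATETIME2      NOT NULL   -- the newly added column
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    -- Copy the existing rows, supplying a value for the new column.
    INSERT INTO dbo.Quotes_New (QuoteId, Symbol, Price, QuotedAt)
        SELECT QuoteId, Symbol, Price, SYSUTCDATETIME()
        FROM dbo.Quotes;

    DROP TABLE dbo.Quotes;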

Performance and Scalability

The following factors will affect the performance gains that can be achieved with In-Memory OLTP:

Communication: An application with many calls to short stored procedures may see a smaller performance gain compared to an application with fewer calls and more functionality implemented in each stored procedure.

Transact-SQL Execution: In-Memory OLTP achieves the best performance when using natively compiled stored procedures rather than interpreted stored procedures or query execution. There can be a benefit to accessing memory-optimized tables from such stored procedures.

Range Scan vs Point Lookup: Memory-optimized nonclustered indexes support range scans and ordered scans. For point lookups, memory-optimized hash indexes have better performance than memory-optimized nonclustered indexes. Memory-optimized nonclustered indexes have better performance than disk-based indexes.
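
A sketch of both index types on one (hypothetical) table shows the choice: a hash index with a suitable BUCKET_COUNT for point lookups on the key, and a nonclustered index for range and ordered scans:

    CREATE TABLE dbo.MeterReadings
    (
        ReadingId  BIGINT        NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),  -- point lookups
        MeterId    INT           NOT NULL,
        ReadAt     DATETIME2     NOT NULL,
        ReadingKwh DECIMAL(10,3) NOT NULL,
        INDEX ix_MeterReadings_ReadAt NONCLUSTERED (ReadAt)               -- range scans
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);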

  • Starting in SQL Server 2016, the query plan for a memory-optimized table can scan the table in parallel. This improves the performance of analytical queries.
    • Hash indexes also became scannable in parallel in SQL Server 2016.
    • Nonclustered indexes also became scannable in parallel in SQL Server 2016.
    • Columnstore indexes have been scannable in parallel since their inception in SQL Server 2014.

Index operations: Index operations are not logged, and they exist only in memory.

Concurrency: Applications whose performance is affected by engine-level concurrency, such as latch contention or blocking, improve significantly when they move to In-Memory OLTP.

What are the Use Cases?

Typical instances of businesses that could benefit include those with a high data insert rate such as power companies recording a constant stream of information from smart meters; websites with a large number of users accessing the same data from, say, online magazines; or website gaming platforms where low latency is business-critical.

For those who want, perhaps need, those performance advantages, the journey to In-Memory OLTP is not a straightforward migration. It is a worthwhile one, however, provided the deployment issues are understood first.