This file provides documentation on Alembic migration directives.
The directives here are used within user-defined migration files, within the upgrade() and downgrade() functions, as well as any functions further invoked by those.
All directives exist as methods on a class called Operations. When migration scripts are run, this object is made available to the script via the alembic.op data member, which is a proxy to an actual instance of Operations. Currently, alembic.op is a real Python module, populated with individual proxies for each method on Operations, so symbols can be imported safely from the alembic.op namespace.
The Operations system is also fully extensible. See Operation Plugins for details on this.
A key design philosophy of the Operation Directives methods is that, to the greatest degree possible, they internally generate the appropriate SQLAlchemy metadata, typically involving Table and Constraint objects. This is so that migration instructions can be given in terms of just the string names and/or flags involved. The exceptions to this rule include the add_column() and create_table() directives, which require full Column objects, though the table metadata is still generated here.
The functions here all require that a MigrationContext has been configured within the env.py script first, typically via EnvironmentContext.configure(). Under normal circumstances they are called from an actual migration script, which itself would be invoked by the EnvironmentContext.run_migrations() method.
Define high level migration operations.
Each operation corresponds to some schema migration operation, executed against a particular MigrationContext which in turn represents connectivity to a database, or a file output stream.
While Operations is normally configured as part of the EnvironmentContext.run_migrations() method called from an env.py script, a standalone Operations instance can be made for use cases external to regular Alembic migrations by passing in a MigrationContext:
from alembic.migration import MigrationContext
from alembic.operations import Operations
conn = myengine.connect()
ctx = MigrationContext.configure(conn)
op = Operations(ctx)
op.alter_column("t", "c", nullable=True)
Note that as of 0.8, most of the methods on this class are produced dynamically using the Operations.register_operation() method.
Construct a new Operations
Parameters: migration_context – a MigrationContext instance.
Issue an “add column” instruction using the current migration context.
e.g.:
from alembic import op
from sqlalchemy import Column, String
op.add_column('organization',
    Column('name', String())
)
The provided Column object can also specify a ForeignKey, referencing a remote table name. Alembic will automatically generate a stub “referenced” table and emit a second ALTER statement in order to add the constraint separately:
from alembic import op
from sqlalchemy import Column, INTEGER, ForeignKey
op.add_column('organization',
    Column('account_id', INTEGER, ForeignKey('accounts.id'))
)
Note that this statement uses the Column construct as is from the SQLAlchemy library. In particular, default values to be created on the database side are specified using the server_default parameter, and not default which only specifies Python-side defaults:
from alembic import op
from sqlalchemy import Column, TIMESTAMP, func
# specify "DEFAULT NOW" along with the column add
op.add_column('account',
    Column('timestamp', TIMESTAMP, server_default=func.now())
)
Issue an “alter column” instruction using the current migration context.
Generally, only that aspect of the column which is being changed, i.e. name, type, nullability, default, needs to be specified. Multiple changes can also be specified at once and the backend should “do the right thing”, emitting each change either separately or together as the backend allows.
MySQL has special requirements here, since MySQL cannot ALTER a column without a full specification. When producing MySQL-compatible migration files, it is recommended that the existing_type, existing_server_default, and existing_nullable parameters be present, if not being altered.
Type changes which are against the SQLAlchemy “schema” types Boolean and Enum may also add or drop constraints which accompany those types on backends that don’t support them natively. The existing_type argument is used in this case to identify and remove a previous constraint that was bound to the type object.
Invoke a series of per-table migrations in batch.
Batch mode allows a series of operations specific to a table to be syntactically grouped together, and allows for alternate modes of table migration, in particular the “recreate” style of migration required by SQLite.
The “recreate” style proceeds by creating a new version of the table under a temporary name with the new specification, copying all existing rows into it, dropping the old table, and renaming the new one into place.
The directive by default will only use “recreate” style on the SQLite backend, and only if directives are present which require this form, e.g. anything other than add_column(). The batch operation on other backends will proceed using standard ALTER TABLE operations.
The method is used as a context manager, which returns an instance of BatchOperations; this object is the same as Operations except that table names and schema names are omitted. E.g.:
with op.batch_alter_table("some_table") as batch_op:
    batch_op.add_column(Column('foo', Integer))
    batch_op.drop_column('bar')
The operations within the context manager are invoked at once when the context is ended. When run against SQLite, if the migrations include operations not supported by SQLite’s ALTER TABLE, the entire table will be copied to a new one with the new specification, moving all data across as well.
The copy operation by default uses reflection to retrieve the current structure of the table, and therefore batch_alter_table() in this mode requires that the migration is run in “online” mode. The copy_from parameter may be passed which refers to an existing Table object, which will bypass this reflection step.
Note
The table copy operation will currently not copy CHECK constraints, and may not copy UNIQUE constraints that are unnamed, as is possible on SQLite. See the section Dealing with Constraints for workarounds.
New in version 0.7.0.
Parameters: naming_convention – a naming convention dictionary of the form described at Integration of Naming Conventions into Operations, Autogenerate, which will be applied to the MetaData during the reflection process. This is typically required if one wants to drop SQLite constraints, as these constraints will not have names when reflected on this backend. Requires SQLAlchemy 0.9.4 or greater. New in version 0.7.1.
Note
batch mode requires SQLAlchemy 0.8 or above.
Issue a “bulk insert” operation using the current migration context.
This provides a means of representing an INSERT of multiple rows which works equally well in the context of executing on a live connection as well as that of generating a SQL script. In the case of a SQL script, the values are rendered inline into the statement.
e.g.:
from alembic import op
from datetime import date
from sqlalchemy.sql import table, column
from sqlalchemy import String, Integer, Date
# Create an ad-hoc table to use for the insert statement.
accounts_table = table('account',
    column('id', Integer),
    column('name', String),
    column('create_date', Date)
)

op.bulk_insert(accounts_table,
    [
        {'id': 1, 'name': 'John Smith',
         'create_date': date(2010, 10, 5)},
        {'id': 2, 'name': 'Ed Williams',
         'create_date': date(2007, 5, 27)},
        {'id': 3, 'name': 'Wendy Jones',
         'create_date': date(2008, 8, 15)},
    ]
)
When using --sql mode, some datatypes may not render inline automatically, such as dates and other special types. When this issue is present, Operations.inline_literal() may be used:
op.bulk_insert(accounts_table,
    [
        {'id': 1, 'name': 'John Smith',
         'create_date': op.inline_literal("2010-10-05")},
        {'id': 2, 'name': 'Ed Williams',
         'create_date': op.inline_literal("2007-05-27")},
        {'id': 3, 'name': 'Wendy Jones',
         'create_date': op.inline_literal("2008-08-15")},
    ],
    multiinsert=False
)
When using Operations.inline_literal() in conjunction with Operations.bulk_insert(), in order for the statement to work in “online” (e.g. non --sql) mode, the multiinsert flag should be set to False. This has the effect of individual INSERT statements being emitted to the database, each with a distinct VALUES clause, so that the “inline” values can still be rendered, rather than attempting to pass the values as bound parameters.
New in version 0.6.4: Operations.inline_literal() can now be used with Operations.bulk_insert(), and the multiinsert flag has been added to assist in this usage when running in “online” mode.
Issue a “create check constraint” instruction using the current migration context.
e.g.:
from alembic import op
from sqlalchemy.sql import column, func
op.create_check_constraint(
    "ck_user_name_len",
    "user",
    func.len(column('name')) > 5
)
CHECK constraints are usually against a SQL expression, so ad-hoc table metadata is usually needed. The function will convert the given arguments into a sqlalchemy.schema.CheckConstraint bound to an anonymous table in order to emit the CREATE statement.
Changed in version 0.8.0: several positional argument names were changed.
Issue a “create foreign key” instruction using the current migration context.
e.g.:
from alembic import op
op.create_foreign_key(
    "fk_user_address", "address",
    "user", ["user_id"], ["id"]
)
This internally generates a Table object containing the necessary columns, then generates a new ForeignKeyConstraint object which it then associates with the Table. Any event listeners associated with this action will be fired off normally. The AddConstraint construct is ultimately used to generate the ALTER statement.
Changed in version 0.8.0: several positional argument names were changed.
Issue a “create index” instruction using the current migration context.
e.g.:
from alembic import op
op.create_index('ik_test', 't1', ['foo', 'bar'])
Functional indexes can be produced by using the sqlalchemy.sql.expression.text() construct:
from alembic import op
from sqlalchemy import text
op.create_index('ik_test', 't1', [text('lower(foo)')])
New in version 0.6.7: support for making use of the text() construct in conjunction with Operations.create_index() in order to produce functional expressions within CREATE INDEX.
Changed in version 0.8.0: several positional argument names were changed.
Issue a “create primary key” instruction using the current migration context.
e.g.:
from alembic import op
op.create_primary_key(
    "pk_my_table", "my_table",
    ["id", "version"]
)
This internally generates a Table object containing the necessary columns, then generates a new PrimaryKeyConstraint object which it then associates with the Table. Any event listeners associated with this action will be fired off normally. The AddConstraint construct is ultimately used to generate the ALTER statement.
Changed in version 0.8.0: several positional argument names were changed.
Issue a “create table” instruction using the current migration context.
This directive receives an argument list similar to that of the traditional sqlalchemy.schema.Table construct, but without the metadata:
from sqlalchemy import INTEGER, VARCHAR, NVARCHAR, TIMESTAMP, Column, func
from alembic import op

op.create_table(
    'account',
    Column('id', INTEGER, primary_key=True),
    Column('name', VARCHAR(50), nullable=False),
    Column('description', NVARCHAR(200)),
    Column('timestamp', TIMESTAMP, server_default=func.now())
)
Note that create_table() accepts Column constructs directly from the SQLAlchemy library. In particular, default values to be created on the database side are specified using the server_default parameter, and not default which only specifies Python-side defaults:
from alembic import op
from sqlalchemy import Column, INTEGER, TIMESTAMP, func

# specify "DEFAULT NOW" along with the "timestamp" column
op.create_table('account',
    Column('id', INTEGER, primary_key=True),
    Column('timestamp', TIMESTAMP, server_default=func.now())
)
The function also returns a newly created Table object, corresponding to the table specification given, which is suitable for immediate SQL operations, in particular Operations.bulk_insert():
from sqlalchemy import INTEGER, VARCHAR, NVARCHAR, TIMESTAMP, Column, func
from alembic import op

account_table = op.create_table(
    'account',
    Column('id', INTEGER, primary_key=True),
    Column('name', VARCHAR(50), nullable=False),
    Column('description', NVARCHAR(200)),
    Column('timestamp', TIMESTAMP, server_default=func.now())
)

op.bulk_insert(
    account_table,
    [
        {"name": "A1", "description": "account 1"},
        {"name": "A2", "description": "account 2"},
    ]
)
New in version 0.7.0.
Returns: the Table object corresponding to the parameters given. New in version 0.7.0: the Table object is returned.
Changed in version 0.8.0: several positional argument names were changed.
Issue a “create unique constraint” instruction using the current migration context.
e.g.:
from alembic import op
op.create_unique_constraint("uq_user_name", "user", ["name"])
This internally generates a Table object containing the necessary columns, then generates a new UniqueConstraint object which it then associates with the Table. Any event listeners associated with this action will be fired off normally. The AddConstraint construct is ultimately used to generate the ALTER statement.
Changed in version 0.8.0: several positional argument names were changed.
Issue a “drop column” instruction using the current migration context.
e.g.:
drop_column('organization', 'account_id')
Drop a constraint of the given name, typically via DROP CONSTRAINT.
Changed in version 0.8.0: several positional argument names were changed.
Issue a “drop index” instruction using the current migration context.
e.g.:
drop_index("accounts")
Changed in version 0.8.0: several positional argument names were changed.
Issue a “drop table” instruction using the current migration context.
e.g.:
drop_table("accounts")
Changed in version 0.8.0: several positional argument names were changed.
Execute the given SQL using the current migration context.
In a SQL script context, the statement is emitted directly to the output stream. There is no return result, however, as this function is oriented towards generating a change script that can run in “offline” mode. For full interaction with a connected database, use the “bind” available from the context:
from alembic import op
connection = op.get_bind()
Also note that any parameterized statement here will not work in offline mode - INSERT, UPDATE and DELETE statements which refer to literal values would need to render inline expressions. For simple use cases, the inline_literal() function can be used for rudimentary quoting of string values. For “bulk” inserts, consider using bulk_insert().
For example, to emit an UPDATE statement which is equally compatible with both online and offline mode:
from sqlalchemy.sql import table, column
from sqlalchemy import String
from alembic import op
account = table('account',
    column('name', String)
)

op.execute(
    account.update().\
        where(account.c.name == op.inline_literal('account 1')).\
        values({'name': op.inline_literal('account 2')})
)
Note above we also used the SQLAlchemy sqlalchemy.sql.expression.table() and sqlalchemy.sql.expression.column() constructs to make a brief, ad-hoc table construct just for our UPDATE statement. A full Table construct of course works perfectly fine as well, though note it’s a recommended practice to at least ensure the definition of a table is self-contained within the migration script, rather than imported from a module that may break compatibility with older migrations.
Parameters:
sql – Any legal SQLAlchemy expression, including a plain string SQL statement, a text() construct, or an insert(), update(), or delete() construct.
execution_options – Optional dictionary of execution options, which will be passed to sqlalchemy.engine.Connection.execution_options().
Indicate a string name that has already had a naming convention applied to it.
This feature combines with the SQLAlchemy naming_convention feature to disambiguate constraint names that have already had naming conventions applied to them, versus those that have not. This is necessary in the case that the "%(constraint_name)s" token is used within a naming convention, so that it can be identified that this particular name should remain fixed.
If Operations.f() is used on a constraint, the naming convention will not take effect:
op.add_column('t', Column('x', Boolean(name=op.f('ck_bool_t_x'))))
Above, the CHECK constraint generated will have the name ck_bool_t_x regardless of whether or not a naming convention is in use.
Alternatively, if a naming convention is in use, and Operations.f() is not used, names will be converted along conventions. If the target_metadata contains the naming convention {"ck": "ck_bool_%(table_name)s_%(constraint_name)s"}, then the output of the following:
op.add_column('t', Column('x', Boolean(name='x')))
will be:
CONSTRAINT ck_bool_t_x CHECK (x in (1, 0))
The function is rendered in the output of autogenerate when a particular constraint name is already converted, for SQLAlchemy version 0.9.4 and greater only. Even though naming_convention was introduced in 0.9.2, the string disambiguation service is new as of 0.9.4.
New in version 0.6.4.
Return the current ‘bind’.
Under normal circumstances, this is the Connection currently being used to emit SQL to the database.
In a SQL script context, this value is None.
Return the MigrationContext object that’s currently in use.
Register an implementation for a given MigrateOperation.
This is part of the operation extensibility API.
See also
Operation Plugins - example of use
Produce an ‘inline literal’ expression, suitable for using in an INSERT, UPDATE, or DELETE statement.
When using Alembic in “offline” mode, CRUD operations aren’t compatible with SQLAlchemy’s default behavior surrounding literal values, which is that they are converted into bound values and passed separately into the execute() method of the DBAPI cursor. An offline SQL script needs to have these rendered inline. While it should always be noted that inline literal values are an enormous security hole in an application that handles untrusted input, a schema migration is not run in this context, so literals are safe to render inline, with the caveat that advanced types like dates may not be supported directly by SQLAlchemy.
See execute() for an example usage of inline_literal().
The environment can also be configured to attempt to render “literal” values inline automatically, for those simple types that are supported by the dialect; see EnvironmentContext.configure.literal_binds for this more recently added feature.
Given a MigrateOperation, invoke it in terms of this Operations instance.
New in version 0.8.0.
Register a new operation for this class.
This method is normally used to add new operations to the Operations class, and possibly the BatchOperations class as well. All Alembic migration operations are implemented via this system, however the system is also available as a public API to facilitate adding custom operations.
New in version 0.8.0.
See also
Operation Plugins
Emit an ALTER TABLE to rename a table.
Parameters:
old_table_name – old name.
new_table_name – new name.
schema – Optional schema name to operate within.
Modifies the interface of the Operations class for batch mode.
This basically omits the table_name and schema parameters from associated methods, as these are a given when running under batch mode.
See also
Operations.batch_alter_table()
Note that as of 0.8, most of the methods on this class are produced dynamically using the Operations.register_operation() method.
Construct a new Operations
Parameters: migration_context – a MigrationContext instance.
Issue an “add column” instruction using the current batch migration context.
See also
Operations.add_column()
Issue an “alter column” instruction using the current batch migration context.
See also
Operations.alter_column()
Issue a “create check constraint” instruction using the current batch migration context.
The batch form of this call omits the source and schema arguments from the call.
See also
Operations.create_check_constraint()
Changed in version 0.8.0: several positional argument names were changed.
Issue a “create foreign key” instruction using the current batch migration context.
The batch form of this call omits the source and source_schema arguments from the call.
e.g.:
with op.batch_alter_table("address") as batch_op:
    batch_op.create_foreign_key(
        "fk_user_address",
        "user", ["user_id"], ["id"]
    )
See also
Operations.create_foreign_key()
Changed in version 0.8.0: several positional argument names were changed.
Issue a “create index” instruction using the current batch migration context.
See also
Operations.create_index()
Issue a “create primary key” instruction using the current batch migration context.
The batch form of this call omits the table_name and schema arguments from the call.
See also
Operations.create_primary_key()
Issue a “create unique constraint” instruction using the current batch migration context.
The batch form of this call omits the source and schema arguments from the call.
Changed in version 0.8.0: several positional argument names were changed.
Issue a “drop column” instruction using the current batch migration context.
See also
Operations.drop_column()
Issue a “drop constraint” instruction using the current batch migration context.
The batch form of this call omits the table_name and schema arguments from the call.
See also
Operations.drop_constraint()
Changed in version 0.8.0: several positional argument names were changed.
Issue a “drop index” instruction using the current batch migration context.
See also
Operations.drop_index()
Changed in version 0.8.0: several positional argument names were changed.
Base class for migration command and organization objects.
This system is part of the operation extensibility API.
New in version 0.8.0.
A dictionary that may be used to store arbitrary information along with this MigrateOperation object.