CHANGELOG.md

3.12.0

Migration Notes - Datastax Drivers:

The Datastax drivers have been moved to version 4. This adds support for many new features, with the caveat that the configuration file format must be changed: in version 4, the driver is configured via the Datastax standard configuration file format, whose properties are in HOCON.

Sample HOCON:

```
MyCassandraDb {
  preparedStatementCacheSize=1000
  keyspace=quill_test

  session {
    basic.contact-points = [ ${?CASSANDRA_CONTACT_POINT_0}, ${?CASSANDRA_CONTACT_POINT_1} ]
    basic.load-balancing-policy.local-datacenter = ${?CASSANDRA_DC}
    basic.request.consistency = LOCAL_QUORUM
    basic.request.page-size = 3
  }
}
```

The session entry keys and values are described in the Datastax "Reference configuration" documentation.

The ZioCassandraSession constructors:

```scala
val zioSessionLayer: ZLayer[Any, Throwable, Has[CassandraZioSession]] =
  CassandraZioSession.fromPrefix("MyCassandraDb")
run(query[Person])
  .provideCustomLayer(zioSessionLayer)
```

Additional parameters can be added programmatically:

```scala
val zioSessionLayer: ZLayer[Any, Throwable, Has[CassandraZioSession]] =
  CassandraZioSession.fromContextConfig(LoadConfig("MyCassandraDb").withValue("keyspace", ConfigValueFactory.fromAnyRef("data")))
run(query[Person])
  .provideCustomLayer(zioSessionLayer)
```

The `session.queryOptions.fetchSize=N` config entry should be replaced by `basic.request.page-size=N`:

```
testStreamDB {
  preparedStatementCacheSize=1000
  keyspace=quill_test

  session {
    ...
    basic.request.page-size = 3
  }
  ...
}
```

Migration Notes - Query Log File:

Production of the query-log file queries.txt has been disabled by default due to issues with SBT and Metals. In order to use it, launch the compiler JVM (e.g. SBT) with the argument -Dquill.log.file=my_queries.sql or set the quill_log_file environment variable (e.g. export quill_log_file=my_queries.sql).

Migration Notes - Monix:

The monix context wrapper MonixJdbcContext.Runner has been renamed to MonixJdbcContext.EffectWrapper. The type Runner needs to be used by ProtoQuill to define quill-context-specific execution contexts.

3.11.0

Migration Notes:

All ZIO JDBC context run methods have now switched their dependency (i.e. R) from Has[Connection] to Has[DataSource]. This should clear up many innocent errors caused by it being unclear how the Has[Connection] was supposed to be provided. As I have come to understand, nearly all DAO service patterns involve grabbing a connection from a pooled DataSource, doing one single CRUD operation, and then returning the connection to the pool. The new JDBC ZIO contexts memorialize this pattern.

  • The signature of QIO[T] has been changed from ZIO[Has[Connection], SQLException, T] to ZIO[Has[DataSource], SQLException, T]. A new type-alias QCIO[T] (lit. Quill Connection IO) has been introduced that represents ZIO[Has[Connection], SQLException, T].

  • If you are using the .onDataSource command, migration should be fairly easy. Whereas previously, a usage of quill-jdbc-zio 3.10.0 might have looked like this:

    ```scala
    object MyPostgresContext extends PostgresZioJdbcContext(Literal); import MyPostgresContext._
    val zioDS = DataSourceLayer.fromPrefix("testPostgresDB")

    val people = quote {
      query[Person].filter(p => p.name == "Alex")
    }

    MyPostgresContext.run(people).onDataSource
      .tap(result => putStrLn(result.toString))
      .provideCustomLayer(zioDS)
    ```

    In 3.11.0 simply remove the .onDataSource in order to use the new context.

    ```scala
    object MyPostgresContext extends PostgresZioJdbcContext(Literal); import MyPostgresContext._
    val zioDS = DataSourceLayer.fromPrefix("testPostgresDB")

    val people = quote {
      query[Person].filter(p => p.name == "Alex")
    }

    MyPostgresContext.run(people)  // Don't need `.onDataSource` anymore
      .tap(result => putStrLn(result.toString))
      .provideCustomLayer(zioDS)
    ```
  • If you are creating a Hikari DataSource directly, passing the dependency is now also simpler. Instead of having to pass the Hikari-pool-layer into DataSourceLayer, just provide the Hikari-pool-layer directly.

    From this:

    ```scala
    def hikariConfig = new HikariConfig(JdbcContextConfig(LoadConfig("testPostgresDB")).configProperties)
    def hikariDataSource: DataSource with Closeable = new HikariDataSource(hikariConfig)

    val zioConn: ZLayer[Any, Throwable, Has[Connection]] =
      Task(hikariDataSource).toLayer >>> DataSourceLayer.live

    MyPostgresContext.run(people)
      .tap(result => putStrLn(result.toString))
      .provideCustomLayer(zioConn)
    ```

    To this:

    ```scala
    def hikariConfig = new HikariConfig(JdbcContextConfig(LoadConfig("testPostgresDB")).configProperties)
    def hikariDataSource: DataSource with Closeable = new HikariDataSource(hikariConfig)

    val zioDS: ZLayer[Any, Throwable, Has[DataSource]] =
      Task(hikariDataSource).toLayer // Don't need `>>> DataSourceLayer.live` anymore!

    MyPostgresContext.run(people)
      .tap(result => putStrLn(result.toString))
      .provideCustomLayer(zioDS)
    ```
  • If you want to provide a java.sql.Connection to a ZIO context directly, you can still do it using the underlying variable.

    ```scala
    object Ctx extends PostgresZioJdbcContext(Literal); import Ctx._
    Ctx.underlying.run(qr1)
      .provide(zio.Has(conn: java.sql.Connection))
    ```
  • Also, when using an underlying context, you can still use onDataSource to go from a Has[Connection] dependency back to a Has[DataSource] dependency (note that it no longer has to be with Closeable).

    ```scala
    object Ctx extends PostgresZioJdbcContext(Literal); import Ctx._
    Ctx.underlying.run(qr1)
      .onDataSource
      .provide(zio.Has(ds: java.sql.DataSource))
    ```
  • Finally, note that the prepare methods have been unaffected by this change. They still require a Has[Connection] and have the signature ZIO[Has[Connection], SQLException, PreparedStatement]. This is because, in order to work with the result of this value (i.e. to work with the PreparedStatement), the connection that created it must still be open.
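As a minimal sketch of the point above (the context name MyPostgresContext and the open connection conn are assumptions, not from the release notes), a prepare call can still be satisfied by providing a connection directly:

```scala
// Hypothetical sketch: `prepare` keeps its Has[Connection] dependency,
// because the resulting PreparedStatement is only usable while the
// connection that created it remains open.
val prepared: ZIO[Has[Connection], SQLException, PreparedStatement] =
  MyPostgresContext.prepare(query[Person].filter(p => p.name == "Alex"))

prepared.provide(zio.Has(conn: java.sql.Connection))
```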

3.10.0

Migration Notes:

No externally facing API changes have been made. This release aligns Quill's internal Context methods with the API defined in ProtoQuill and introduces a root-level context (in the quill-sql-portable module) that will be shared together with ProtoQuill. Two arguments info: ExecutionInfo and dc: DatasourceContext have been introduced to all execute___ and prepare___ methods. For Scala2-Quill, these arguments should be ignored as they contain no relevant information. ProtoQuill uses them in order to pass Ast information as well as whether the query is Static or Dynamic into execute and prepare methods. In the future, Scala2-Quill may be enhanced to use them as well.

3.9.0

Migration Notes:

This release modifies Quill's core encoding DSL; however, this is very much an internal API. If you are using MappedEncoder, which should be the case for most users, you will be completely unaffected. The MappedEncoder signatures remain the same.

Quill's core encoding API has changed:

```scala
// From:
type BaseEncoder[T] = (Index, T, PrepareRow) => PrepareRow
type BaseDecoder[T] = (Index, ResultRow) => T
// To:
type BaseEncoder[T] = (Index, T, PrepareRow, Session) => PrepareRow
type BaseDecoder[T] = (Index, ResultRow, Session) => T
```

That means that the internal signature of all encoders has also changed. For example, the JdbcEncoder has changed:

```scala
// From:
case class JdbcEncoder[T](sqlType: Int, encoder: BaseEncoder[T]) extends BaseEncoder[T] {
  override def apply(index: Index, value: T, row: PrepareRow) =
    encoder(index + 1, value, row)
}
// To:
case class JdbcEncoder[T](sqlType: Int, encoder: BaseEncoder[T]) extends BaseEncoder[T] {
  override def apply(index: Index, value: T, row: PrepareRow, session: Session) =
    encoder(index + 1, value, row, session)
}
```

If you are writing encoders that directly implement BaseEncoder, they will have to be modified with an additional session: Session parameter.

The actual type of Session will vary. For JDBC it will be Connection, for Cassandra it will be some implementation of CassandraSession, and for other systems that use an entirely different session paradigm it will simply be Unit.

Again, if you are using MappedEncoders for all of your custom encoding needs, you will not be affected by this change.
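For instance, a custom mapping via MappedEncoding is untouched by this release; no Session parameter appears anywhere in user code. A minimal sketch (the UUID example is an assumption for illustration, not from the release notes):

```scala
import io.getquill.MappedEncoding
import java.util.UUID

// The MappedEncoding API is unchanged in 3.9.0: these definitions compile
// exactly as they did before, with no session parameter.
implicit val uuidEncode: MappedEncoding[UUID, String] = MappedEncoding(_.toString)
implicit val uuidDecode: MappedEncoding[String, UUID] = MappedEncoding(UUID.fromString)
```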

3.8.0

Migration Notes:

The quill-jdbc-zio contexts' .run method was designed to work with ZIO in an idiomatic way. As such, the environment parameter of their return type included the zio.blocking.Blocking dependency. This added a significant amount of complexity. Instead of ZIO[Has[Connection], SQLException, T], the return type became ZIO[Has[Connection] with Blocking, SQLException, T]. Instead of ZIO[Has[DataSource with Closeable], SQLException, T], the return type became ZIO[Has[DataSource with Closeable] with Blocking, SQLException, T]. Various types such as QConnection and QDataSource were created in order to encapsulate these concepts, but this only led to additional confusion. Furthermore, actually supplying a Connection or DataSource with Closeable required first peeling off the with Blocking clause, calling .provide, and then appending it back on. The fact that a Connection needs to be opened from a DataSource (which will typically be a Hikari connection pool) further complicates the problem, because the aforementioned process needs to be done twice. All of this leads to the clear conclusion that the with Blocking construct has bad ergonomics. For this reason, the ZIO team has decided to drop the concept of with Blocking in ZIO 2 altogether.

As a result of this, I have decided to drop the with Blocking construct in advance. Quill queries resulting from the run(qry) command still run on the blocking pool, but with Blocking is not included in the signature. This also means that the need for QConnection and QDataSource disappears, since they are now just Has[Connection] and Has[DataSource with Closeable] respectively. It also means that the constructors on the corresponding objects (e.g. QDataSource.fromPrefix("myDB")) are no longer consistent with any actual construct in QIO, so they are not needed either.

Instead, I have introduced a simple layer-constructor called DataSourceLayer, which has a .live implementation that converts ZIO[Has[Connection], SQLException, T] to ZIO[Has[DataSource with Closeable], SQLException, T] by taking a connection from the data-source and returning it immediately afterward; this is the analogue of what QDataSource.toConnection used to do. You can use it like this:

```scala
def hikariDataSource: DataSource with Closeable = ...
val zioConn: ZLayer[Any, Throwable, Has[Connection]] =
  Task(hikariDataSource).toLayer >>> DataSourceLayer.live
run(people)
  .provideCustomLayer(zioConn)
```

You can also use the extension method .onDataSource (or .onDS for short) to do the same thing:

```scala
def hikariDataSource: DataSource with Closeable = ...
run(people)
  .onDataSource
  .provide(Has(hikariDataSource))
```

Also, the constructor-methods fromPrefix, fromConfig, fromJdbcConfig and fromDataSource are available on DataSourceLayer to construct instances of ZLayer[Has[DataSource with Closeable], SQLException, Has[Connection]]. Combined with the toDataSource construct, these provide a simple way to construct various Hikari pools from a corresponding typesafe-config file application.conf.

```scala
run(people)
  .onDataSource
  .provideLayer(DataSourceLayer.fromPrefix("testPostgresDB"))
```

Also note that the objects QDataSource and QConnection have not yet been removed. Instead, all of their methods have been marked as deprecated, with comments indicating the DataSourceLayer/onDataSource calls to use instead.

Cassandra:

Similar changes have been made in quill-cassandra-zio. Has[CassandraZioSession] with Blocking has been replaced with just Has[CassandraZioSession], so it is now much easier to provide:

```scala
val session: CassandraZioSession = ???
run(people)
  .provide(Has(session))
```

The ZioCassandraSession constructors however are all still fine to use:

```scala
val zioSessionLayer: ZLayer[Any, Throwable, Has[CassandraZioSession]] =
  ZioCassandraSession.fromPrefix("testStreamDB")
run(query[Person])
  .provideCustomLayer(zioSessionLayer)
```

3.7.2

3.7.1

3.7.0

Migration Notes: In order to properly accommodate a good ZIO experience, several refactorings had to be done to various internal context classes; none of these changes modify class structure in a breaking way.

The following was done for quill-jdbc-zio

  • Query Preparation base type definitions have been moved out of JdbcContextSimplified into JdbcContextBase, which inherits a class named StagedPrepare that defines prepare-types (e.g. type PrepareQueryResult = Session => Result[PrepareRow]).
  • This has been done so that the ZIO JDBC Context can define prepare-types via the ZIO R parameter instead of a lambda parameter (e.g. ZIO[QConnection, SQLException, PrepareRow] a.k.a. QIO[PrepareRow]).
  • In order to prevent user-facing breaking changes, the contexts in BaseContexts.scala now extend from both JdbcContextSimplified (indirectly) and JdbcContextBase, thus preserving the Session => Result[PrepareRow] prepare-types.
  • The context JdbcContextSimplified now contains the prepareQuery/Action/BatchAction methods used by all contexts other than the ZIO contexts which define these methods independently (since they use the ZIO R parameter).
  • All remaining context functionality (i.e. the run(...) series of functions) has been extracted out into JdbcRunContext which the ZIO JDBC Contexts in ZioJdbcContexts.scala as well as all the other JDBC Contexts now extend.

Similarly for quill-cassandra-zio

  • The CassandraSessionContext, on which the CassandraMonixContext and all the other Cassandra contexts are based, keeps internal state (i.e. session, keyspace, caches).
  • This state was pulled out as separate classes e.g. SyncCache, AsyncFutureCache (the ZIO equivalent of which is AsyncZioCache).
  • Then a CassandraZioSession is created which extends these state-containers; however, it is not directly a base-class of the CassandraZioContext.
  • Instead it is returned as a dependency from the CassandraZioContext run/prepare commands as part of the type ZIO[Has[ZioCassandraSession] with Blocking, Throwable, T] (a.k.a CIO[T]). This allows the primary context CassandraZioContext to be stateless.

3.6.1

Migration Notes:

  • Memoization of Quats should improve performance of dynamic queries based on some profiling analysis. This change should not have any user-facing changes.

3.6.0

This description is an aggregation of the 3.6.0-RC1, RC2 and RC3 as well as several new items.

Migration Notes:

  • The Cassandra base UDT class io.getquill.context.cassandra.Udt has been moved to io.getquill.Udt.

  • When working with databases which do not support boolean literals (SQL Server, Oracle, etc.), infixes representing booleans will be converted to equality-expressions.

    For example:

    ```scala
    query[Person].filter(p => infix"isJoe(p.name)".as[Boolean])
    // SELECT ... FROM Person p WHERE isJoe(p.name)
    // Becomes> SELECT ... FROM Person p WHERE 1 = isJoe(p.name)
    ```

    This is because the aforementioned databases do not directly support boolean literals (i.e. true/false) or expressions that yield them.

    In some cases however, it is desirable for the above behavior not to happen and for the whole infix statement to be treated as an expression. For example:

    ```scala
    query[Person].filter(p => infix"${p.age} > 21".as[Boolean])
    // We Need This> SELECT ... FROM Person p WHERE p.age > 21
    // Not This> SELECT ... FROM Person p WHERE 1 = p.age > 21
    ```

    In order to have this behavior, instead of infix"...".as[Boolean], use infix"...".asCondition.

    ```scala
    query[Person].filter(p => infix"${p.age} > 21".asCondition)
    // We Need This> SELECT ... FROM Person p WHERE p.age > 21
    ```

    If the condition represents a pure function, be sure to use infix"...".pure.asCondition.

3.6.0-RC3

3.6.0-RC2

Migration Notes:

  • When working with databases which do not support boolean literals (SQL Server, Oracle, etc.), infixes representing booleans will be converted to equality-expressions.

    For example:

    ```scala
    query[Person].filter(p => infix"isJoe(p.name)".as[Boolean])
    // SELECT ... FROM Person p WHERE isJoe(p.name)
    // Becomes> SELECT ... FROM Person p WHERE 1 = isJoe(p.name)
    ```

    This is because the aforementioned databases do not directly support boolean literals (i.e. true/false) or expressions that yield them.

    In some cases however, it is desirable for the above behavior not to happen and for the whole infix statement to be treated as an expression. For example:

    ```scala
    query[Person].filter(p => infix"${p.age} > 21".as[Boolean])
    // We Need This> SELECT ... FROM Person p WHERE p.age > 21
    // Not This> SELECT ... FROM Person p WHERE 1 = p.age > 21
    ```

    In order to have this behavior, instead of infix"...".as[Boolean], use infix"...".asCondition.

    ```scala
    query[Person].filter(p => infix"${p.age} > 21".asCondition)
    // We Need This> SELECT ... FROM Person p WHERE p.age > 21
    ```

    If the condition represents a pure function, be sure to use infix"...".pure.asCondition.

  • This release is not binary compatible with any Quill version before 3.5.3.

  • Any code generated by the Quill Code Generator with quote { ... } blocks will have to be regenerated with this Quill version if generated before 3.5.3.

  • In most SQL dialects (i.e. everything except Postgres), boolean literals and expressions yielding them are not supported, so statements such as SELECT foo=bar FROM ... are not supported. In order to get equivalent logic, it is necessary to use case statements, e.g.

    ```sql
    SELECT CASE WHEN foo=bar THEN 1 ELSE 0 END
    ```

    On the other hand, in a WHERE-clause, it is the opposite:

    ```sql
    SELECT ... WHERE CASE WHEN (...) THEN foo ELSE bar END
    ```

    is invalid and needs to be rewritten. Naively, a 1= could be inserted:

    ```sql
    SELECT ... WHERE 1 = (CASE WHEN (...) THEN foo ELSE bar END)
    ```

    Note that this behavior can be disabled via the -Dquill.query.smartBooleans switch when issued during compile-time for compile-time queries and during runtime for runtime queries.

    Additionally, in certain situations, it is far more preferable to express this without the CASE WHEN construct:

    ```sql
    SELECT ... WHERE ((...) && foo) || !(...) && foo
    ```

    This is because CASE statements in SQL are not sargable and generally cannot be well optimized.

  • A large portion of the Quill DSL has been moved outside of QueryDsl into the top level under the io.getquill package. Due to this change, it may be necessary to import io.getquill.Query if you are not already importing io.getquill._.

3.6.0-RC1

Migration Notes:

  • This release is not binary compatible with any Quill version before 3.5.3.

  • Any code generated by the Quill Code Generator with quote { ... } blocks will have to be regenerated with this Quill version if generated before 3.5.3.

  • In most SQL dialects (i.e. everything except Postgres), boolean literals and expressions yielding them are not supported, so statements such as SELECT foo=bar FROM ... are not supported. In order to get equivalent logic, it is necessary to use case statements, e.g.

    ```sql
    SELECT CASE WHEN foo=bar THEN 1 ELSE 0 END
    ```

    On the other hand, in a WHERE-clause, it is the opposite:

    ```sql
    SELECT ... WHERE CASE WHEN (...) THEN foo ELSE bar END
    ```

    is invalid and needs to be rewritten. Naively, a 1= could be inserted:

    ```sql
    SELECT ... WHERE 1 = (CASE WHEN (...) THEN foo ELSE bar END)
    ```

    Note that this behavior can be disabled via the -Dquill.query.smartBooleans switch when issued during compile-time for compile-time queries and during runtime for runtime queries.

    Additionally, in certain situations, it is far more preferable to express this without the CASE WHEN construct:

    ```sql
    SELECT ... WHERE ((...) && foo) || !(...) && foo
    ```

    This is because CASE statements in SQL are not sargable and generally cannot be well optimized.

  • A large portion of the Quill DSL has been moved outside of QueryDsl into the top level under the io.getquill package. Due to this change, it may be necessary to import io.getquill.Query if you are not already importing io.getquill._.

3.5.3

Please skip this release and proceed directly to the 3.6.0-RC line. This release was originally a test-bed for the new Quats-based functionality which was supposed to be a strictly internal mechanism. Unfortunately multiple issues were found. They will be addressed in the 3.6.X line.

Migration Notes:

  • Quill 3.5.3 is source-compatible but not binary-compatible with Quill 3.5.2.
  • Any code generated by the Quill Code Generator with quote { ... } blocks will have to be regenerated with Quill 3.5.3 as the AST has substantially changed.
  • The implementation of Quill Application Types (Quats) has changed the internals of nested query expansion. Queries with a querySchema or a schemaMeta will be aliased between nested clauses slightly differently. Given:

    ```scala
    case class Person(firstName: String, lastName: String)
    val ctx = new SqlMirrorContext(PostgresDialect, Literal)
    ```

    Before:

    ```sql
    SELECT x.first_name, x.last_name FROM (
      SELECT x.first_name, x.last_name FROM person x) AS x
    ```

    After:

    ```sql
    SELECT x.firstName, x.lastName FROM (
      SELECT x.first_name AS firstName, x.last_name AS lastName FROM person x) AS x
    ```

    Note however that the semantic result of the queries should be the same. No user-level code change for this should be required.

3.5.2

Migration Notes:

  • Much of the content in QueryDsl has been moved to the top-level for better portability with the upcoming Dotty implementation. This means that things like Query are no longer part of Context but now are directly in the io.getquill package. If you are importing io.getquill._ your code should be unaffected.
  • Custom decoders written for Finagle Postgres no longer require a ClassTag.

3.5.1

3.5.0

3.4.10

3.4.9

3.4.8

Documentation Updates:

Migration Notes:

  • Monix 3.0.0 is not binary compatible with 3.0.0-RC3 which was a dependency of Quill 3.4.7. If you are using the Quill Monix modules, please update your dependencies accordingly.

3.4.7

3.4.6

3.4.5

3.4.4

3.4.3

3.4.2

Migration Notes:

  • NamingStrategy is no longer applied on column and table names defined in querySchema, all column and table names defined in querySchema are now final. If you are relying on this behavior to name your columns/tables correctly, you will need to update your querySchema objects.
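For example (a minimal sketch; the Person case class and column names are assumptions), since querySchema names are now final, spell out the exact database identifiers rather than relying on a NamingStrategy:

```scala
case class Person(firstName: String, lastName: String)

// These names are used verbatim: no NamingStrategy (e.g. SnakeCase)
// will be applied to "people", "first_name", or "last_name".
val people = quote {
  querySchema[Person]("people", _.firstName -> "first_name", _.lastName -> "last_name")
}
```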

3.4.1

Migration Notes:

  • Nested sub-queries will now have their terms re-ordered in certain circumstances although the functionality of the entire query should not change. If you have deeply nested queries with Infixes, double check that they are in the correct position.

3.4.0

Migration Notes:

  • Infixes are now not treated as pure functions by default. This means wherever they are used, nested queries may be created. You can use .pure (e.g. infix"MY_PURE_UDF".pure.as[T]) to revert to the previous behavior. See the Infix section of the documentation for more detail.
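A minimal sketch of the opt-in pure behavior described above (the Person entity and UDF usage are illustrative assumptions):

```scala
// Without .pure, Quill treats the infix as impure and may wrap the
// query in a nested sub-query; .pure restores the pre-3.4.0 behavior.
val q = quote {
  query[Person].map(p => infix"MY_PURE_UDF(${p.name})".pure.as[String])
}
```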

3.3.0

Noteworthy Version Bumps:

  • monix - 3.0.0-RC3
  • cassandra-driver-core - 3.7.2
  • orientdb-graphdb - 3.0.21
  • postgresql - 42.2.6
  • sqlite-jdbc - 3.28.0

Migration Notes:

  • The returning method no longer excludes the specified ID column from the insertion as it used to. Use the returningGenerated method in order to achieve that. See the 'Database-generated values' section of the documentation for more detail.
  • The == method now works Scala-idiomatically. That means that when two Option[T]-wrapped columns are compared, None == None will now yield true. The === operator can be used in order to compare Option[T]-wrapped columns in an ANSI-SQL idiomatic way (i.e. None === None yields false). See the 'equals' section of the documentation for more detail.
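As a sketch of the distinction (the Person entity and its Option[String] column are assumptions):

```scala
// Scala-idiomatic equality: if both the column and the lifted value
// are None, this predicate is true.
query[Person].filter(p => p.middleName == lift(Option.empty[String]))

// ANSI-SQL idiomatic equality: comparing None to None yields false,
// matching SQL's NULL = NULL semantics.
query[Person].filter(p => p.middleName === lift(Option.empty[String]))
```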

3.2.0

3.1.0

3.0.1

3.0.0

Migration notes

  • io.getquill.CassandraStreamContext is moved into quill-cassandra-monix module and now uses Monix 3.
  • io.getquill.CassandraMonixContext has been introduced which should eventually replace io.getquill.CassandraStreamContext.
  • Spark queries with nested objects will now rely on the star * operator and struct function to generate sub-schemas as opposed to full expansion of the selection.
  • Most functionality from JdbcContext has been moved to JdbcContextBase for the sake of re-usability. JdbcContext is only intended to be used for synchronous JDBC.

2.6.0

Migration notes

  • When an infix starts with a query, the resulting SQL query won't be nested.

2.5.4

2.5.0, 2.5.1, 2.5.2, and 2.5.3

Broken releases, do not use.

2.4.2

2.4.1

2.3.3

2.3.2

2.3.1

2.3.0

2.2.0

2.1.0

2.0.0

We're proud to announce Quill 2.0. All bugs were fixed, so this release doesn't have any known bugs!

Fixes

#872, #874, #875, #877, #879, #889, #890, #892, #894, #897, #899, #900, #903, #902, #904, #906, #907, #908, #909, #910, #913, #915, #917, #920, #921, #925, #928

Migration notes

  • Sources now take a parameter for idiom and naming strategy instead of just type parameters. For instance, new SqlSource[MysqlDialect, Literal] becomes new SqlSource(MysqlDialect, Literal).
  • Composite naming strategies don't use mixing anymore. Instead of the type Literal with UpperCase, use parameter value NamingStrategy(Literal, UpperCase).
  • Anonymous classes aren't supported for function declaration anymore. Use a method with a type parameter instead. For instance, replace val q = quote { new { def apply[T](q: Query[T]) = ... } } with def q[T] = quote { (q: Query[T]) => ... }.

1.4.0

Migration notes

  • quill-async contexts: java.time.LocalDate now supports only date SQL types, and java.time.LocalDateTime only timestamp SQL types. Joda times follow these conventions accordingly. An exception is made for java.util.Date: it supports both date and timestamp types for historical reasons (java.sql.Timestamp extends java.util.Date).
  • quill-jdbc encoders do not accept java.sql.Types as a first parameter anymore.

1.3.0

1.2.1

1.1.1

see migration notes below

Migration notes

  • Cassandra context property ctx.session.addressTranslater is renamed to ctx.session.addressTranslator

1.1.0

see migration notes below

Migration notes

  • JDBC contexts are implemented in separate classes - PostgresJdbcContext, MysqlJdbcContext, SqliteJdbcContext, H2JdbcContext
  • all contexts are supplied with default java.util.UUID encoder and decoder

1.0.1

1.0.0-RC1 - 20-Oct-2016

Migration notes

  • New API for schema definition: query[Person].schema(_.entity("people").columns(_.id -> "person_id")) becomes querySchema[Person]("People", _.id -> "person_id"). Note that the entity name ("People") is now always required.
  • WrappedValue[T] no longer exists, Quill can now automatically encode AnyVals.

0.10.0 - 5-Sep-2016

see migration notes below

Migration notes

  • mappedEncoding has been renamed to MappedEncoding.
  • The way we add async drivers has been changed. To add mysql async to your project use quill-async-mysql, and for postgres async use quill-async-postgres. It is no longer necessary to add quill-async yourself.
  • Action assignments and equality operations are now typesafe. If there's a type mismatch between the operands, the quotation will not compile.

0.9.0 - 22-Aug-2016

see migration notes below

Migration notes

  • The fallback mechanism that looks for implicit encoders defined in the context instance has been removed. This means that if you don't import context._, you have to change the specific imports to include the encoders in use.
  • context.run now receives only one parameter. The second parameter that used to receive runtime values now doesn't exist any more. Use lift or liftQuery instead.
  • Use liftQuery + foreach to perform batch actions and define contains/in queries.
  • insert now always receives a parameter, that can be a case class.
  • Non-lifted collections aren't supported anymore. For example, query[Person].filter(p => List(10, 20).contains(p.age)) no longer compiles. Use liftQuery instead.
  • schema(_.generated()) has been replaced by returning.
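The non-lifted collection bullet above can be migrated as in this sketch (the Person entity is an assumption):

```scala
// Before (no longer supported):
//   query[Person].filter(p => List(10, 20).contains(p.age))

// After: lift the collection with liftQuery to get a contains/in query.
val q = quote {
  query[Person].filter(p => liftQuery(List(10, 20)).contains(p.age))
}
```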

0.8.0 / 17-Jul-2016

see migration notes below

Migration notes

This version introduces Context as a replacement for Source. This change makes quotation creation dependent on the context, to open the path for a few refactorings and improvements we're planning to work on before the 1.0-RC1 release.

Migration steps:

  • Remove any import that is not import io.getquill._
  • Replace the Source creation by a Context creation. See the readme for more details. All types necessary to create the context instances are provided by import io.getquill._.
  • Instead of importing from io.getquill._ to create quotations, import from your context instance: import myContext._. The context import will provide all types and methods to interact with quotations and the database.
  • See the documentation about dependent contexts in case you get compilation errors because of type mismatches.

0.7.0 / 2-Jul-2016

0.6.0 / 9-May-2016

0.5.0 / 17-Mar-2016

0.4.1 / 28-Feb-2016

0.4.0 / 19-Feb-2016

0.3.1 / 01-Feb-2016

0.3.0 / 26-Jan-2016

0.2.1 / 28-Dec-2015

0.2.0 / 24-Dec-2015

0.1.0 / 27-Nov-2015

  • Initial release