A dataset represents an SQL query, or more generally, an abstract set of rows in the database. Datasets can be used to create, retrieve, update and delete records.
Query results are always retrieved on demand, so a dataset can be kept around and reused indefinitely (datasets never cache results):
my_posts = DB[:posts].filter(:author => 'david') # no records are retrieved
my_posts.all # records are retrieved
my_posts.all # records are retrieved again
Most dataset methods return modified copies of the dataset (functional style), so you can reuse different datasets to access data:
posts = DB[:posts]
davids_posts = posts.filter(:author => 'david')
old_posts = posts.filter('stamp < ?', Date.today - 7)
davids_old_posts = davids_posts.filter('stamp < ?', Date.today - 7)
Datasets are Enumerable objects, so they can be manipulated using any of the Enumerable methods, such as map, inject, etc.
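For example, since datasets are Enumerable, map and inject run the query and then operate on the returned rows in Ruby (the table and column names here are illustrative):

titles = DB[:posts].map{|row| row[:title]} # SELECT * FROM posts
total = DB[:invoices].inject(0){|sum, row| sum + row[:amount]}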
For more information, see the "Dataset Basics" guide.
These methods all return modified copies of the receiver.
COLUMN_CHANGE_OPTS - The dataset options that require the removal of cached columns if changed.
These symbols have _join methods created (e.g. inner_join) that call #join_table with the symbol, passing along the arguments and block from the method call.
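For example (tables and columns are illustrative):

DB[:items].inner_join(:categories, :id=>:category_id)
# SELECT * FROM items INNER JOIN categories ON (categories.id = items.category_id)

DB[:items].left_join(:categories, :id=>:category_id)
# SELECT * FROM items LEFT JOIN categories ON (categories.id = items.category_id)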
EXTENSIONS - Hash of extension name symbols to callable objects to load the extension into the Dataset object (usually by extending it with a module defined in the extension).
All methods that return modified datasets with a joined table added.
NON_SQL_OPTIONS - Which options don't affect the SQL generation. Used by simple_select_all? to determine if this is a simple SELECT * FROM table.
Methods that return modified datasets
These symbols have _join methods created (e.g. natural_join) that call #join_table with the symbol. They only accept a single table argument which is passed to #join_table, and they raise an error if called with a block.
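For example (tables are illustrative):

DB[:items].natural_join(:prices) # SELECT * FROM items NATURAL JOIN prices
DB[:items].cross_join(:tags) # SELECT * FROM items CROSS JOIN tags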
Register an extension callback for Dataset objects. ext should be the extension name symbol, and mod should either be a Module that the dataset is extended with, or a callable object called with the database object. If mod is not provided, a block can be provided and is treated as the mod object.
If mod is a module, this also registers a Database extension that will extend all of the database's datasets.
# File lib/sequel/dataset/query.rb, line 53 def self.register_extension(ext, mod=nil, &block) if mod raise(Error, "cannot provide both mod and block to Dataset.register_extension") if block if mod.is_a?(Module) block = proc{|ds| ds.extend(mod)} Sequel::Database.register_extension(ext){|db| db.extend_datasets(mod)} else block = mod end end Sequel.synchronize{EXTENSIONS[ext] = block} end
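A minimal sketch of registering a custom extension; MyDatasetMethods and :my_methods are hypothetical, and a real extension would normally also ship a matching extension file so that Sequel.extension can require it:

module MyDatasetMethods # hypothetical example module
  def active
    where(:active=>true)
  end
end
Sequel::Dataset.register_extension(:my_methods, MyDatasetMethods)
# Since a Module was given, a Database extension is also registered that
# extends all of a database's datasets with it (see above).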
Adds a further filter to an existing filter using AND. If no filter exists, an error is raised. This method is identical to filter except it expects an existing filter.
DB[:table].filter(:a).and(:b) # SELECT * FROM table WHERE a AND b
# File lib/sequel/dataset/query.rb, line 71 def and(*cond, &block) raise(InvalidOperation, "No existing filter found.") unless @opts[:having] || @opts[:where] filter(*cond, &block) end
Returns a new clone of the dataset with the given options merged. If the options changed include options in COLUMN_CHANGE_OPTS, the cached columns are deleted. This method should generally not be called directly by user code.
# File lib/sequel/dataset/query.rb, line 80 def clone(opts = {}) c = super() c.opts = @opts.merge(opts) c.instance_variable_set(:@columns, nil) if opts.keys.any?{|o| COLUMN_CHANGE_OPTS.include?(o)} c end
Returns a copy of the dataset with the SQL DISTINCT clause. The DISTINCT clause is used to remove duplicate rows from the output. If arguments are provided, uses a DISTINCT ON clause, in which case it will only be distinct on those columns, instead of all returned columns. Raises an error if arguments are given and DISTINCT ON is not supported.
DB[:items].distinct # SQL: SELECT DISTINCT * FROM items
DB[:items].order(:id).distinct(:id) # SQL: SELECT DISTINCT ON (id) * FROM items ORDER BY id
# File lib/sequel/dataset/query.rb, line 96 def distinct(*args) raise(InvalidOperation, "DISTINCT ON not supported") if !args.empty? && !supports_distinct_on? clone(:distinct => args) end
Adds an EXCEPT clause using a second dataset object. An EXCEPT compound dataset returns all rows in the current dataset that are not in the given dataset. Raises an InvalidOperation if the operation is not supported. Options:
:alias - Use the given value as the #from_self alias
:all - Set to true to use EXCEPT ALL instead of EXCEPT, so duplicate rows can occur
:from_self - Set to false to not wrap the returned dataset in a #from_self, use with care.
DB[:items].except(DB[:other_items])
# SELECT * FROM (SELECT * FROM items EXCEPT SELECT * FROM other_items) AS t1

DB[:items].except(DB[:other_items], :all=>true, :from_self=>false)
# SELECT * FROM items EXCEPT ALL SELECT * FROM other_items

DB[:items].except(DB[:other_items], :alias=>:i)
# SELECT * FROM (SELECT * FROM items EXCEPT SELECT * FROM other_items) AS i
# File lib/sequel/dataset/query.rb, line 118 def except(dataset, opts={}) opts = {:all=>opts} unless opts.is_a?(Hash) raise(InvalidOperation, "EXCEPT not supported") unless supports_intersect_except? raise(InvalidOperation, "EXCEPT ALL not supported") if opts[:all] && !supports_intersect_except_all? compound_clone(:except, dataset, opts) end
Performs the inverse of #filter. Note that if you have multiple filter conditions, this is not the same as a negation of all conditions.
DB[:items].exclude(:category => 'software')
# SELECT * FROM items WHERE (category != 'software')

DB[:items].exclude(:category => 'software', :id=>3)
# SELECT * FROM items WHERE ((category != 'software') OR (id != 3))
# File lib/sequel/dataset/query.rb, line 133 def exclude(*cond, &block) _filter_or_exclude(true, @opts[:having] ? :having : :where, *cond, &block) end
Inverts the given conditions and adds them to the HAVING clause.
DB[:items].select_group(:name).exclude_having{count(name) < 2} # SELECT name FROM items GROUP BY name HAVING (count(name) >= 2)
# File lib/sequel/dataset/query.rb, line 141 def exclude_having(*cond, &block) _filter_or_exclude(true, :having, *cond, &block) end
Inverts the given conditions and adds them to the WHERE clause.
DB[:items].select_group(:name).exclude_where(:category => 'software')
# SELECT * FROM items WHERE (category != 'software')

DB[:items].select_group(:name).
  exclude_having{count(name) < 2}.
  exclude_where(:category => 'software')
# SELECT name FROM items WHERE (category != 'software')
# GROUP BY name HAVING (count(name) >= 2)
# File lib/sequel/dataset/query.rb, line 155 def exclude_where(*cond, &block) _filter_or_exclude(true, :where, *cond, &block) end
Return a clone of the dataset loaded with the extensions, see extension!.
# File lib/sequel/dataset/query.rb, line 160 def extension(*exts) clone.extension!(*exts) end
Load an extension into the receiver. In addition to requiring the extension file, this also modifies the dataset to work with the extension (usually extending it with a module defined in the extension file). If no related extension file exists or the extension does not have specific support for Dataset objects, an Error will be raised. Returns self.
# File lib/sequel/dataset/query.rb, line 169 def extension!(*exts) Sequel.extension(*exts) exts.each do |ext| if pr = Sequel.synchronize{EXTENSIONS[ext]} pr.call(self) else raise(Error, "Extension #{ext} does not have specific support handling individual datasets") end end self end
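For example, loading one of the extensions that ships with Sequel (such as columns_introspection, assuming it registers dataset-level support) into a single dataset copy:

ds = DB[:items].extension(:columns_introspection)
ds.select(:id, :name).columns # may be determined without a database query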
Returns a copy of the dataset with the given conditions imposed upon it.
If the query already has a HAVING clause, then the conditions are imposed in the HAVING clause. If not, then they are imposed in the WHERE clause.
filter accepts the following argument types:
Hash - list of equality/inclusion expressions
Array - depends:
If first member is a string, assumes the rest of the arguments are parameters and interpolates them into the string.
If all members are arrays of length two, treats the same way as a hash, except it allows for duplicate keys to be specified.
Otherwise, treats each argument as a separate condition.
String - taken literally
Symbol - taken as a boolean column argument (e.g. WHERE active)
Sequel::SQL::BooleanExpression - an existing condition expression, probably created using the Sequel expression filter DSL.
filter also takes a block, which should return one of the above argument types, and is treated the same way. This block yields a virtual row object, which is easy to use to create identifiers and functions. For more details on the virtual row support, see the "Virtual Rows" guide
If both a block and regular argument are provided, they get ANDed together.
Examples:
DB[:items].filter(:id => 3)
# SELECT * FROM items WHERE (id = 3)

DB[:items].filter('price < ?', 100)
# SELECT * FROM items WHERE price < 100

DB[:items].filter([[:id, [1,2,3]], [:id, 0..10]])
# SELECT * FROM items WHERE ((id IN (1, 2, 3)) AND ((id >= 0) AND (id <= 10)))

DB[:items].filter('price < 100')
# SELECT * FROM items WHERE price < 100

DB[:items].filter(:active)
# SELECT * FROM items WHERE active

DB[:items].filter{price < 100}
# SELECT * FROM items WHERE (price < 100)
Multiple filter calls can be chained for scoping:
software = dataset.filter(:category => 'software').filter{price < 100} # SELECT * FROM items WHERE ((category = 'software') AND (price < 100))
See the "Dataset Filtering" guide for more examples and details.
# File lib/sequel/dataset/query.rb, line 233 def filter(*cond, &block) _filter(@opts[:having] ? :having : :where, *cond, &block) end
Returns a cloned dataset with a :update lock style.
DB[:table].for_update # SELECT * FROM table FOR UPDATE
# File lib/sequel/dataset/query.rb, line 240 def for_update lock_style(:update) end
Returns a copy of the dataset with the source changed. If no source is given, removes all tables. If multiple sources are given, it is the same as using a CROSS JOIN (cartesian product) between all tables.
DB[:items].from # SQL: SELECT *
DB[:items].from(:blah) # SQL: SELECT * FROM blah
DB[:items].from(:blah, :foo) # SQL: SELECT * FROM blah, foo
# File lib/sequel/dataset/query.rb, line 251 def from(*source) table_alias_num = 0 sources = [] ctes = nil source.each do |s| case s when Hash s.each{|k,v| sources << SQL::AliasedExpression.new(k,v)} when Dataset if hoist_cte?(s) ctes ||= [] ctes += s.opts[:with] s = s.clone(:with=>nil) end sources << SQL::AliasedExpression.new(s, dataset_alias(table_alias_num+=1)) when Symbol sch, table, aliaz = split_symbol(s) if aliaz s = sch ? SQL::QualifiedIdentifier.new(sch, table) : SQL::Identifier.new(table) sources << SQL::AliasedExpression.new(s, aliaz.to_sym) else sources << s end else sources << s end end o = {:from=>sources.empty? ? nil : sources} o[:with] = (opts[:with] || []) + ctes if ctes o[:num_dataset_sources] = table_alias_num if table_alias_num > 0 clone(o) end
Returns a dataset selecting from the current dataset. Supplying the :alias option controls the alias of the result.
ds = DB[:items].order(:name).select(:id, :name)
# SELECT id, name FROM items ORDER BY name

ds.from_self
# SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS t1

ds.from_self(:alias=>:foo)
# SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS foo
# File lib/sequel/dataset/query.rb, line 295 def from_self(opts={}) fs = {} @opts.keys.each{|k| fs[k] = nil unless NON_SQL_OPTIONS.include?(k)} clone(fs).from(opts[:alias] ? as(opts[:alias]) : self) end
Match any of the columns to any of the patterns. The terms can be strings (which use LIKE) or regular expressions (which are only supported on MySQL and PostgreSQL). Note that the total number of pattern matches will be Array(columns).length * Array(terms).length, which could cause performance issues.
Options (all are boolean):
:all_columns - All columns must be matched to any of the given patterns.
:all_patterns - All patterns must match at least one of the columns.
:case_insensitive - Use a case insensitive pattern match (the default is case sensitive if the database supports it).
If both :all_columns and :all_patterns are true, all columns must match all patterns.
Examples:
dataset.grep(:a, '%test%')
# SELECT * FROM items WHERE (a LIKE '%test%')

dataset.grep([:a, :b], %w'%test% foo')
# SELECT * FROM items WHERE ((a LIKE '%test%') OR (a LIKE 'foo') OR (b LIKE '%test%') OR (b LIKE 'foo'))

dataset.grep([:a, :b], %w'%foo% %bar%', :all_patterns=>true)
# SELECT * FROM a WHERE (((a LIKE '%foo%') OR (b LIKE '%foo%')) AND ((a LIKE '%bar%') OR (b LIKE '%bar%')))

dataset.grep([:a, :b], %w'%foo% %bar%', :all_columns=>true)
# SELECT * FROM a WHERE (((a LIKE '%foo%') OR (a LIKE '%bar%')) AND ((b LIKE '%foo%') OR (b LIKE '%bar%')))

dataset.grep([:a, :b], %w'%foo% %bar%', :all_patterns=>true, :all_columns=>true)
# SELECT * FROM a WHERE ((a LIKE '%foo%') AND (b LIKE '%foo%') AND (a LIKE '%bar%') AND (b LIKE '%bar%'))
# File lib/sequel/dataset/query.rb, line 332 def grep(columns, patterns, opts={}) if opts[:all_patterns] conds = Array(patterns).map do |pat| SQL::BooleanExpression.new(opts[:all_columns] ? :AND : :OR, *Array(columns).map{|c| SQL::StringExpression.like(c, pat, opts)}) end filter(SQL::BooleanExpression.new(opts[:all_patterns] ? :AND : :OR, *conds)) else conds = Array(columns).map do |c| SQL::BooleanExpression.new(:OR, *Array(patterns).map{|pat| SQL::StringExpression.like(c, pat, opts)}) end filter(SQL::BooleanExpression.new(opts[:all_columns] ? :AND : :OR, *conds)) end end
Returns a copy of the dataset with the results grouped by the value of the given columns. If a block is given, it is treated as a virtual row block, similar to filter.
DB[:items].group(:id) # SELECT * FROM items GROUP BY id
DB[:items].group(:id, :name) # SELECT * FROM items GROUP BY id, name
DB[:items].group{[a, sum(b)]} # SELECT * FROM items GROUP BY a, sum(b)
# File lib/sequel/dataset/query.rb, line 353 def group(*columns, &block) virtual_row_columns(columns, block) clone(:group => (columns.compact.empty? ? nil : columns)) end
Returns a dataset grouped by the given column with count by group. Column aliases may be supplied, and will be included in the select clause. If a block is given, it is treated as a virtual row block, similar to filter.
Examples:
DB[:items].group_and_count(:name).all
# SELECT name, count(*) AS count FROM items GROUP BY name
# => [{:name=>'a', :count=>1}, ...]

DB[:items].group_and_count(:first_name, :last_name).all
# SELECT first_name, last_name, count(*) AS count FROM items GROUP BY first_name, last_name
# => [{:first_name=>'a', :last_name=>'b', :count=>1}, ...]

DB[:items].group_and_count(:first_name___name).all
# SELECT first_name AS name, count(*) AS count FROM items GROUP BY first_name
# => [{:name=>'a', :count=>1}, ...]

DB[:items].group_and_count{substr(first_name, 1, 1).as(initial)}.all
# SELECT substr(first_name, 1, 1) AS initial, count(*) AS count FROM items GROUP BY substr(first_name, 1, 1)
# => [{:initial=>'a', :count=>1}, ...]
# File lib/sequel/dataset/query.rb, line 384 def group_and_count(*columns, &block) select_group(*columns, &block).select_more(COUNT_OF_ALL_AS_COUNT) end
Alias of group
# File lib/sequel/dataset/query.rb, line 359 def group_by(*columns, &block) group(*columns, &block) end
Adds the appropriate CUBE syntax to GROUP BY.
# File lib/sequel/dataset/query.rb, line 389 def group_cube raise Error, "GROUP BY CUBE not supported on #{db.database_type}" unless supports_group_cube? clone(:group_options=>:cube) end
Adds the appropriate ROLLUP syntax to GROUP BY.
# File lib/sequel/dataset/query.rb, line 395 def group_rollup raise Error, "GROUP BY ROLLUP not supported on #{db.database_type}" unless supports_group_rollup? clone(:group_options=>:rollup) end
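A sketch of both methods; the sales table and its columns are illustrative, and the exact CUBE/ROLLUP syntax emitted depends on the database:

DB[:sales].select_group(:region, :product).select_more{sum(:amount).as(:total)}.group_rollup
# e.g. SELECT region, product, sum(amount) AS total FROM sales GROUP BY ROLLUP(region, product)

DB[:sales].select_group(:region, :product).select_more{sum(:amount).as(:total)}.group_cube
# e.g. SELECT region, product, sum(amount) AS total FROM sales GROUP BY CUBE(region, product)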
Adds an INTERSECT clause using a second dataset object. An INTERSECT compound dataset returns all rows in both the current dataset and the given dataset. Raises an InvalidOperation if the operation is not supported. Options:
:alias - Use the given value as the #from_self alias
:all - Set to true to use INTERSECT ALL instead of INTERSECT, so duplicate rows can occur
:from_self - Set to false to not wrap the returned dataset in a #from_self, use with care.
DB[:items].intersect(DB[:other_items])
# SELECT * FROM (SELECT * FROM items INTERSECT SELECT * FROM other_items) AS t1

DB[:items].intersect(DB[:other_items], :all=>true, :from_self=>false)
# SELECT * FROM items INTERSECT ALL SELECT * FROM other_items

DB[:items].intersect(DB[:other_items], :alias=>:i)
# SELECT * FROM (SELECT * FROM items INTERSECT SELECT * FROM other_items) AS i
# File lib/sequel/dataset/query.rb, line 425 def intersect(dataset, opts={}) opts = {:all=>opts} unless opts.is_a?(Hash) raise(InvalidOperation, "INTERSECT not supported") unless supports_intersect_except? raise(InvalidOperation, "INTERSECT ALL not supported") if opts[:all] && !supports_intersect_except_all? compound_clone(:intersect, dataset, opts) end
Inverts the current filter.
DB[:items].filter(:category => 'software').invert
# SELECT * FROM items WHERE (category != 'software')

DB[:items].filter(:category => 'software', :id=>3).invert
# SELECT * FROM items WHERE ((category != 'software') OR (id != 3))
# File lib/sequel/dataset/query.rb, line 439 def invert having, where = @opts[:having], @opts[:where] raise(Error, "No current filter") unless having || where o = {} o[:having] = SQL::BooleanExpression.invert(having) if having o[:where] = SQL::BooleanExpression.invert(where) if where clone(o) end
Alias of inner_join
# File lib/sequel/dataset/query.rb, line 449 def join(*args, &block) inner_join(*args, &block) end
Returns a joined dataset. Not usually called directly, users should use the appropriate join method (e.g. join, left_join, natural_join, cross_join) which fills in the type argument.
Takes the following arguments:
type - The type of join to do (e.g. :inner)
table - the table to join to: a Symbol, String, or SQL identifier naming a table (aliases via the ___ suffix are supported), or a Dataset, in which case a subselect is used
expr - specifies conditions, depends on type:
Hash, Array of two element arrays - Assumes key (1st arg) is column of joined table (unless already qualified), and value (2nd arg) is column of the last joined or primary table (or the :implicit_qualifier option). To specify multiple conditions on a single joined table column, you must use an array. Uses a JOIN with an ON clause.
Array - If all members of the array are symbols, considers them as columns and uses a JOIN with a USING clause. Most databases will remove duplicate columns from the result set if this is used.
nil - If a block is not given, doesn't use ON or USING, so the JOIN should be a NATURAL or CROSS join. If a block is given, uses an ON clause based on the block, see below.
Everything else - pretty much the same as using the argument in a call to filter, so strings are considered literal, symbols specify boolean columns, and Sequel expressions can be used. Uses a JOIN with an ON clause.
options - a hash of options, with any of the following keys:
:table_alias - the name of the table's alias when joining, necessary for joining to the same table more than once. No alias is used by default.
:implicit_qualifier - The name to use for qualifying implicit conditions. By default, the last joined or primary table is used.
:qualify - Can be set to false to not do any implicit qualification. Can be set to :deep to use the Qualifier AST Transformer, which will attempt to qualify subexpressions of the expression tree.
block - The block argument should only be given if a JOIN with an ON clause is used, in which case it yields the table alias/name for the table currently being joined, the table alias/name for the last joined (or first) table, and an array of previous SQL::JoinClause. Unlike filter, this block is not treated as a virtual row block.
Examples:
DB[:a].join_table(:cross, :b)
# SELECT * FROM a CROSS JOIN b

DB[:a].join_table(:inner, DB[:b], :c=>:d)
# SELECT * FROM a INNER JOIN (SELECT * FROM b) AS t1 ON (t1.c = a.d)

DB[:a].join_table(:left, :b___c, [:d])
# SELECT * FROM a LEFT JOIN b AS c USING (d)

DB[:a].natural_join(:b).join_table(:inner, :c) do |ta, jta, js|
  (Sequel.qualify(ta, :d) > Sequel.qualify(jta, :e)) &
  {Sequel.qualify(ta, :f)=>DB.from(js.first.table).select(:g)}
end
# SELECT * FROM a NATURAL JOIN b INNER JOIN c
# ON ((c.d > b.e) AND (c.f IN (SELECT g FROM b)))
# File lib/sequel/dataset/query.rb, line 506 def join_table(type, table, expr=nil, options={}, &block) if hoist_cte?(table) s, ds = hoist_cte(table) return s.join_table(type, ds, expr, options, &block) end using_join = expr.is_a?(Array) && !expr.empty? && expr.all?{|x| x.is_a?(Symbol)} if using_join && !supports_join_using? h = {} expr.each{|s| h[s] = s} return join_table(type, table, h, options) end case options when Hash table_alias = options[:table_alias] last_alias = options[:implicit_qualifier] qualify_type = options[:qualify] when Symbol, String, SQL::Identifier table_alias = options last_alias = nil else raise Error, "invalid options format for join_table: #{options.inspect}" end if Dataset === table if table_alias.nil? table_alias_num = (@opts[:num_dataset_sources] || 0) + 1 table_alias = dataset_alias(table_alias_num) end table_name = table_alias else table, implicit_table_alias = split_alias(table) table_alias ||= implicit_table_alias table_name = table_alias || table end join = if expr.nil? and !block SQL::JoinClause.new(type, table, table_alias) elsif using_join raise(Sequel::Error, "can't use a block if providing an array of symbols as expr") if block SQL::JoinUsingClause.new(expr, type, table, table_alias) else last_alias ||= @opts[:last_joined_table] || first_source_alias if Sequel.condition_specifier?(expr) expr = expr.collect do |k, v| case qualify_type when false nil # Do no qualification when :deep k = Sequel::Qualifier.new(self, table_name).transform(k) v = Sequel::Qualifier.new(self, last_alias).transform(v) else k = qualified_column_name(k, table_name) if k.is_a?(Symbol) v = qualified_column_name(v, last_alias) if v.is_a?(Symbol) end [k,v] end expr = SQL::BooleanExpression.from_value_pairs(expr) end if block expr2 = yield(table_name, last_alias, @opts[:join] || []) expr = expr ? SQL::BooleanExpression.new(:AND, expr, expr2) : expr2 end SQL::JoinOnClause.new(expr, type, table, table_alias) end opts = {:join => (@opts[:join] || []) + [join], :last_joined_table => table_name} opts[:num_dataset_sources] = table_alias_num if table_alias_num clone(opts) end
If given an integer, the dataset will contain only the first l results. If given a range, it will contain only those at offsets within that range. If a second argument is given, it is used as an offset. To use an offset without a limit, pass nil as the first argument.
DB[:items].limit(10) # SELECT * FROM items LIMIT 10
DB[:items].limit(10, 20) # SELECT * FROM items LIMIT 10 OFFSET 20
DB[:items].limit(10...20) # SELECT * FROM items LIMIT 10 OFFSET 10
DB[:items].limit(10..20) # SELECT * FROM items LIMIT 11 OFFSET 10
DB[:items].limit(nil, 20) # SELECT * FROM items OFFSET 20
# File lib/sequel/dataset/query.rb, line 595 def limit(l, o = (no_offset = true; nil)) return from_self.limit(l, o) if @opts[:sql] if Range === l o = l.first l = l.last - l.first + (l.exclude_end? ? 0 : 1) end l = l.to_i if l.is_a?(String) && !l.is_a?(LiteralString) if l.is_a?(Integer) raise(Error, 'Limits must be greater than or equal to 1') unless l >= 1 end opts = {:limit => l} if o o = o.to_i if o.is_a?(String) && !o.is_a?(LiteralString) if o.is_a?(Integer) raise(Error, 'Offsets must be greater than or equal to 0') unless o >= 0 end opts[:offset] = o elsif !no_offset opts[:offset] = nil end clone(opts) end
Returns a cloned dataset with the given lock style. If style is a string, it will be used directly. Otherwise, a symbol may be used for database independent locking. Currently :update is respected by most databases, and :share is supported by some.
DB[:items].lock_style('FOR SHARE') # SELECT * FROM items FOR SHARE
# File lib/sequel/dataset/query.rb, line 625 def lock_style(style) clone(:lock => style) end
Returns a cloned dataset without a row_proc.
ds = DB[:items]
ds.row_proc = proc{|r| r.invert}
ds.all # => [{2=>:id}]
ds.naked.all # => [{:id=>2}]
# File lib/sequel/dataset/query.rb, line 635 def naked ds = clone ds.row_proc = nil ds end
Adds an alternate filter to an existing filter using OR. If no filter exists an Error is raised.
DB[:items].filter(:a).or(:b) # SELECT * FROM items WHERE a OR b
# File lib/sequel/dataset/query.rb, line 645 def or(*cond, &block) clause = (@opts[:having] ? :having : :where) raise(InvalidOperation, "No existing filter found.") unless @opts[clause] cond = cond.first if cond.size == 1 clone(clause => SQL::BooleanExpression.new(:OR, @opts[clause], filter_expr(cond, &block))) end
Returns a copy of the dataset with the order changed. If the dataset has an existing order, it is ignored and overwritten with this order. If a nil is given the returned dataset has no order. This can accept multiple arguments of varying kinds, such as SQL functions. If a block is given, it is treated as a virtual row block, similar to filter.
DB[:items].order(:name) # SELECT * FROM items ORDER BY name
DB[:items].order(:a, :b) # SELECT * FROM items ORDER BY a, b
DB[:items].order(Sequel.lit('a + b')) # SELECT * FROM items ORDER BY a + b
DB[:items].order(:a + :b) # SELECT * FROM items ORDER BY (a + b)
DB[:items].order(Sequel.desc(:name)) # SELECT * FROM items ORDER BY name DESC
DB[:items].order(Sequel.asc(:name, :nulls=>:last)) # SELECT * FROM items ORDER BY name ASC NULLS LAST
DB[:items].order{sum(name).desc} # SELECT * FROM items ORDER BY sum(name) DESC
DB[:items].order(nil) # SELECT * FROM items
# File lib/sequel/dataset/query.rb, line 666 def order(*columns, &block) virtual_row_columns(columns, block) clone(:order => (columns.compact.empty?) ? nil : columns) end
Alias of #order_more, for naming consistency with order_prepend.
# File lib/sequel/dataset/query.rb, line 672 def order_append(*columns, &block) order_more(*columns, &block) end
Alias of order
# File lib/sequel/dataset/query.rb, line 677 def order_by(*columns, &block) order(*columns, &block) end
Returns a copy of the dataset with the order columns added to the end of the existing order.
DB[:items].order(:a).order(:b) # SELECT * FROM items ORDER BY b
DB[:items].order(:a).order_more(:b) # SELECT * FROM items ORDER BY a, b
# File lib/sequel/dataset/query.rb, line 686 def order_more(*columns, &block) columns = @opts[:order] + columns if @opts[:order] order(*columns, &block) end
Returns a copy of the dataset with the order columns added to the beginning of the existing order.
DB[:items].order(:a).order(:b) # SELECT * FROM items ORDER BY b
DB[:items].order(:a).order_prepend(:b) # SELECT * FROM items ORDER BY b, a
# File lib/sequel/dataset/query.rb, line 696 def order_prepend(*columns, &block) ds = order(*columns, &block) @opts[:order] ? ds.order_more(*@opts[:order]) : ds end
Qualify to the given table, or first source if no table is given.
DB[:items].filter(:id=>1).qualify
# SELECT items.* FROM items WHERE (items.id = 1)

DB[:items].filter(:id=>1).qualify(:i)
# SELECT i.* FROM items WHERE (i.id = 1)
# File lib/sequel/dataset/query.rb, line 708 def qualify(table=first_source) qualify_to(table) end
Return a copy of the dataset with unqualified identifiers in the SELECT, WHERE, GROUP, HAVING, and ORDER clauses qualified by the given table. If no columns are currently selected, select all columns of the given table.
DB[:items].filter(:id=>1).qualify_to(:i) # SELECT i.* FROM items WHERE (i.id = 1)
# File lib/sequel/dataset/query.rb, line 719 def qualify_to(table) o = @opts return clone if o[:sql] h = {} (o.keys & QUALIFY_KEYS).each do |k| h[k] = qualified_expression(o[k], table) end h[:select] = [SQL::ColumnAll.new(table)] if !o[:select] || o[:select].empty? clone(h) end
Qualify the dataset to its current first source. This is useful if you have unqualified identifiers in the query that all refer to the first source, and you want to join to another table which has columns with the same name as columns in the current dataset. See qualify_to.
DB[:items].filter(:id=>1).qualify_to_first_source # SELECT items.* FROM items WHERE (items.id = 1)
# File lib/sequel/dataset/query.rb, line 738 def qualify_to_first_source qualify_to(first_source) end
Modify the RETURNING clause, only supported on a few databases. If returning is used, instead of insert returning the autogenerated primary key or update/delete returning the number of modified rows, results are returned using fetch_rows.
DB[:items].returning # RETURNING *
DB[:items].returning(nil) # RETURNING NULL
DB[:items].returning(:id, :name) # RETURNING id, name
# File lib/sequel/dataset/query.rb, line 750 def returning(*values) clone(:returning=>values) end
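On a database that supports RETURNING (PostgreSQL, for example), the returned rows are yielded to the block passed to the action method; a sketch:

DB[:items].returning(:id).insert(:name=>'abc'){|row| p row[:id]}
# INSERT INTO items (name) VALUES ('abc') RETURNING id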
Returns a copy of the dataset with the order reversed. If no order is given, the existing order is inverted.
DB[:items].reverse(:id) # SELECT * FROM items ORDER BY id DESC
DB[:items].reverse{foo(bar)} # SELECT * FROM items ORDER BY foo(bar) DESC
DB[:items].order(:id).reverse # SELECT * FROM items ORDER BY id DESC
DB[:items].order(:id).reverse(Sequel.desc(:name)) # SELECT * FROM items ORDER BY name ASC
# File lib/sequel/dataset/query.rb, line 761 def reverse(*order, &block) virtual_row_columns(order, block) order(*invert_order(order.empty? ? @opts[:order] : order)) end
Alias of reverse
# File lib/sequel/dataset/query.rb, line 767 def reverse_order(*order, &block) reverse(*order, &block) end
Returns a copy of the dataset with the columns selected changed to the given columns. This also takes a virtual row block, similar to filter.
DB[:items].select(:a) # SELECT a FROM items
DB[:items].select(:a, :b) # SELECT a, b FROM items
DB[:items].select{[a, sum(b)]} # SELECT a, sum(b) FROM items
# File lib/sequel/dataset/query.rb, line 778 def select(*columns, &block) virtual_row_columns(columns, block) m = [] columns.each do |i| i.is_a?(Hash) ? m.concat(i.map{|k, v| SQL::AliasedExpression.new(k,v)}) : m << i end clone(:select => m) end
Returns a copy of the dataset selecting the wildcard if no arguments are given. If arguments are given, treat them as tables and select all columns (using the wildcard) from each table.
DB[:items].select(:a).select_all # SELECT * FROM items
DB[:items].select_all(:items) # SELECT items.* FROM items
DB[:items].select_all(:items, :foo) # SELECT items.*, foo.* FROM items
# File lib/sequel/dataset/query.rb, line 794 def select_all(*tables) if tables.empty? clone(:select => nil) else select(*tables.map{|t| i, a = split_alias(t); a || i}.map{|t| SQL::ColumnAll.new(t)}) end end
Returns a copy of the dataset with the given columns added to the existing selected columns. If no columns are currently selected, it will select the columns given in addition to *.
DB[:items].select(:a).select(:b) # SELECT b FROM items
DB[:items].select(:a).select_append(:b) # SELECT a, b FROM items
DB[:items].select_append(:b) # SELECT *, b FROM items
# File lib/sequel/dataset/query.rb, line 809 def select_append(*columns, &block) cur_sel = @opts[:select] if !cur_sel || cur_sel.empty? unless supports_select_all_and_column? return select_all(*(Array(@opts[:from]) + Array(@opts[:join]))).select_more(*columns, &block) end cur_sel = [WILDCARD] end select(*(cur_sel + columns), &block) end
Set both the select and group clauses with the given columns. Column aliases may be supplied, and will be included in the select clause. This also takes a virtual row block similar to filter.
DB[:items].select_group(:a, :b)
# SELECT a, b FROM items GROUP BY a, b

DB[:items].select_group(:c___a){f(c2)}
# SELECT c AS a, f(c2) FROM items GROUP BY c, f(c2)
# File lib/sequel/dataset/query.rb, line 829 def select_group(*columns, &block) virtual_row_columns(columns, block) select(*columns).group(*columns.map{|c| unaliased_identifier(c)}) end
Returns a copy of the dataset with the given columns added to the existing selected columns. If no columns are currently selected it will just select the columns given.
DB[:items].select(:a).select(:b) # SELECT b FROM items
DB[:items].select(:a).select_more(:b) # SELECT a, b FROM items
DB[:items].select_more(:b) # SELECT b FROM items
# File lib/sequel/dataset/query.rb, line 841 def select_more(*columns, &block) columns = @opts[:select] + columns if @opts[:select] select(*columns, &block) end
Set the server for this dataset to use. Used to pick a specific database shard to run a query against, or to override the default (where SELECT uses :read_only database and all other queries use the :default database). This method is always available but is only useful when database sharding is being used.
DB[:items].all # Uses the :read_only or :default server
DB[:items].delete # Uses the :default server
DB[:items].server(:blah).delete # Uses the :blah server
# File lib/sequel/dataset/query.rb, line 855 def server(servr) clone(:server=>servr) end
Set the default values for insert and update statements. The values hash passed to insert or update is merged into this hash, so any values in the hash passed to insert or update will override values passed to this method.
DB[:items].set_defaults(:a=>'a', :c=>'c').insert(:a=>'d', :b=>'b') # INSERT INTO items (a, c, b) VALUES ('d', 'c', 'b')
# File lib/sequel/dataset/query.rb, line 865 def set_defaults(hash) clone(:defaults=>(@opts[:defaults]||{}).merge(hash)) end
Set values that override hash arguments given to insert and update statements. This hash is merged into the hash provided to insert or update, so values will override any values given in the insert/update hashes.
DB[:items].set_overrides(:a=>'a', :c=>'c').insert(:a=>'d', :b=>'b') # INSERT INTO items (a, c, b) VALUES ('a', 'c', 'b')
# File lib/sequel/dataset/query.rb, line 875 def set_overrides(hash) clone(:overrides=>hash.merge(@opts[:overrides]||{})) end
Unbind bound variables from this dataset's filter and return an array of two objects. The first object is a modified dataset where the filter has been replaced with one that uses bound variable placeholders. The second object is the hash of unbound variables. You can then prepare and execute (or just call) the dataset with the bound variables to get results.
ds, bv = DB[:items].filter(:a=>1).unbind
ds # SELECT * FROM items WHERE (a = $a)
bv # {:a => 1}
ds.call(:select, bv)
# File lib/sequel/dataset/query.rb, line 889 def unbind u = Unbinder.new ds = clone(:where=>u.transform(opts[:where]), :join=>u.transform(opts[:join])) [ds, u.binds] end
Returns a copy of the dataset with no grouping (GROUP or HAVING clause) applied.
DB[:items].group(:a).having(:a=>1).where(:b).ungrouped # SELECT * FROM items WHERE b
# File lib/sequel/dataset/query.rb, line 907 def ungrouped clone(:group => nil, :having => nil) end
Adds a UNION clause using a second dataset object. A UNION compound dataset returns all rows in either the current dataset or the given dataset. Options:
:alias - Use the given value as the #from_self alias
:all - Set to true to use UNION ALL instead of UNION, so duplicate rows can occur
:from_self - Set to false to not wrap the returned dataset in a #from_self, use with care.
DB[:items].union(DB[:other_items])
# SELECT * FROM (SELECT * FROM items UNION SELECT * FROM other_items) AS t1

DB[:items].union(DB[:other_items], :all=>true, :from_self=>false)
# SELECT * FROM items UNION ALL SELECT * FROM other_items

DB[:items].union(DB[:other_items], :alias=>:i)
# SELECT * FROM (SELECT * FROM items UNION SELECT * FROM other_items) AS i
# File lib/sequel/dataset/query.rb, line 927 def union(dataset, opts={}) opts = {:all=>opts} unless opts.is_a?(Hash) compound_clone(:union, dataset, opts) end
Returns a copy of the dataset with no limit or offset.
DB[:items].limit(10, 20).unlimited # SELECT * FROM items
# File lib/sequel/dataset/query.rb, line 935 def unlimited clone(:limit=>nil, :offset=>nil) end
Returns a copy of the dataset with no order.
DB[:items].order(:a).unordered # SELECT * FROM items
# File lib/sequel/dataset/query.rb, line 942 def unordered order(nil) end
Add a condition to the WHERE clause. See filter for argument types.
DB[:items].group(:a).having(:a).filter(:b)
# SELECT * FROM items GROUP BY a HAVING a AND b

DB[:items].group(:a).having(:a).where(:b)
# SELECT * FROM items WHERE b GROUP BY a HAVING a
# File lib/sequel/dataset/query.rb, line 953 def where(*cond, &block) _filter(:where, *cond, &block) end
Add a common table expression (CTE) with the given name and a dataset that defines the CTE. A common table expression acts as an inline view for the query. Options:
:args - Specify the arguments/columns for the CTE, should be an array of symbols.
:recursive - Specify that this is a recursive CTE
DB[:items].with(:items, DB[:syx].filter(:name.like('A%'))) # WITH items AS (SELECT * FROM syx WHERE (name LIKE 'A%')) SELECT * FROM items
# File lib/sequel/dataset/query.rb, line 965 def with(name, dataset, opts={}) raise(Error, 'This datatset does not support common table expressions') unless supports_cte? if hoist_cte?(dataset) s, ds = hoist_cte(dataset) s.with(name, ds, opts) else clone(:with=>(@opts[:with]||[]) + [opts.merge(:name=>name, :dataset=>dataset)]) end end
Add a recursive common table expression (CTE) with the given name, a dataset that defines the nonrecursive part of the CTE, and a dataset that defines the recursive part of the CTE. Options:
:args - Specify the arguments/columns for the CTE, should be an array of symbols.
:union_all - Set to false to use UNION instead of UNION ALL for combining the nonrecursive and recursive parts.
DB[:t].with_recursive(:t,
  DB[:i1].select(:id, :parent_id).filter(:parent_id=>nil),
  DB[:i1].join(:t, :id=>:parent_id).select(:i1__id, :i1__parent_id),
  :args=>[:id, :parent_id])
# WITH RECURSIVE "t"("id", "parent_id") AS (
#   SELECT "id", "parent_id" FROM "i1" WHERE ("parent_id" IS NULL)
#   UNION ALL
#   SELECT "i1"."id", "i1"."parent_id" FROM "i1" INNER JOIN "t" ON ("t"."id" = "i1"."parent_id")
# ) SELECT * FROM "t"
# File lib/sequel/dataset/query.rb, line 991 def with_recursive(name, nonrecursive, recursive, opts={}) raise(Error, 'This datatset does not support common table expressions') unless supports_cte? if hoist_cte?(nonrecursive) s, ds = hoist_cte(nonrecursive) s.with_recursive(name, ds, recursive, opts) elsif hoist_cte?(recursive) s, ds = hoist_cte(recursive) s.with_recursive(name, nonrecursive, ds, opts) else clone(:with=>(@opts[:with]||[]) + [opts.merge(:recursive=>true, :name=>name, :dataset=>nonrecursive.union(recursive, {:all=>opts[:union_all] != false, :from_self=>false}))]) end end
Returns a copy of the dataset with the static SQL used. This is useful if you want to keep the same row_proc/graph, but change the SQL used to custom SQL.
DB[:items].with_sql('SELECT * FROM foo') # SELECT * FROM foo
You can use placeholders in your SQL and provide arguments for those placeholders:
DB[:items].with_sql('SELECT ? FROM foo', 1) # SELECT 1 FROM foo
You can also provide a method name and arguments to call to get the SQL:
DB[:items].with_sql(:insert_sql, :b=>1) # INSERT INTO items (b) VALUES (1)
# File lib/sequel/dataset/query.rb, line 1016 def with_sql(sql, *args) if sql.is_a?(Symbol) sql = send(sql, *args) else sql = SQL::PlaceholderLiteralString.new(sql, args) unless args.empty? end clone(:sql=>sql) end
Add the dataset to the list of compounds
# File lib/sequel/dataset/query.rb, line 1028 def compound_clone(type, dataset, opts) if hoist_cte?(dataset) s, ds = hoist_cte(dataset) return s.compound_clone(type, ds, opts) end ds = compound_from_self.clone(:compounds=>Array(@opts[:compounds]).map{|x| x.dup} + [[type, dataset.compound_from_self, opts[:all]]]) opts[:from_self] == false ? ds : ds.from_self(opts) end
Return true if the dataset has a non-nil value for any key in opts.
# File lib/sequel/dataset/query.rb, line 1038 def options_overlap(opts) !(@opts.collect{|k,v| k unless v.nil?}.compact & opts).empty? end
Whether this dataset is a simple SELECT * FROM table.
# File lib/sequel/dataset/query.rb, line 1043 def simple_select_all? o = @opts.reject{|k,v| v.nil? || NON_SQL_OPTIONS.include?(k)} o.length == 1 && (f = o[:from]) && f.length == 1 && (f.first.is_a?(Symbol) || f.first.is_a?(SQL::AliasedExpression)) end
These methods all execute the dataset's SQL on the database. They don't return modified datasets, so if used in a method chain they should be the last method called.
Action methods defined by Sequel that execute code on the database.
Enable column introspection for every dataset.
# File lib/sequel/extensions/columns_introspection.rb, line 64 def self.introspect_all_columns include ColumnsIntrospection remove_method(:columns) if instance_methods(false).map{|x| x.to_s}.include?('columns') end
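A sketch of enabling it globally, assuming the columns_introspection extension file has been loaded:

Sequel.extension :columns_introspection
Sequel::Dataset.introspect_all_columns
DB[:items].select(:id, :name).columns # => [:id, :name], determined without a query when possible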
Inserts the given argument into the database. Returns self so it can be used safely when chaining:
DB[:items] << {:id=>0, :name=>'Zero'} << DB[:old_items].select(:id, :name)
# File lib/sequel/dataset/actions.rb, line 22 def <<(arg) insert(arg) self end
Returns the first record matching the conditions. Examples:
DB[:table][:id=>1] # SELECT * FROM table WHERE (id = 1) LIMIT 1
# => {:id=>1}
# File lib/sequel/dataset/actions.rb, line 31 def [](*conditions) raise(Error, ARRAY_ACCESS_ERROR_MSG) if (conditions.length == 1 and conditions.first.is_a?(Integer)) or conditions.length == 0 first(*conditions) end
Update all records matching the conditions with the values specified. Returns the number of rows affected.
DB[:table][:id=>1] = {:id=>2} # UPDATE table SET id = 2 WHERE id = 1
# => 1 # number of rows affected
# File lib/sequel/dataset/actions.rb, line 41 def []=(conditions, values) filter(conditions).update(values) end
Returns an array with all records in the dataset. If a block is given, the array is iterated over after all items have been loaded.
DB[:table].all # SELECT * FROM table
# => [{:id=>1, ...}, {:id=>2, ...}, ...]

# Iterate over all rows in the table
DB[:table].all{|row| p row}
# File lib/sequel/dataset/actions.rb, line 53 def all(&block) a = [] each{|r| a << r} post_load(a) a.each(&block) if block a end
Returns the average value for the given column/expression. Uses a virtual row block if no argument is given.
DB[:table].avg(:number) # SELECT avg(number) FROM table LIMIT 1
# => 3

DB[:table].avg{function(column)} # SELECT avg(function(column)) FROM table LIMIT 1
# => 1
# File lib/sequel/dataset/actions.rb, line 68 def avg(column=Sequel.virtual_row(&Proc.new)) aggregate_dataset.get{avg(column)} end
Returns the columns in the result set in order as an array of symbols. If the columns are currently cached, returns the cached value. Otherwise, a SELECT query is performed to retrieve a single row in order to get the columns.
If you are looking for all columns for a single table and maybe some information about each column (e.g. database type), see Database#schema.
DB[:table].columns # => [:id, :name]
# File lib/sequel/dataset/actions.rb, line 81 def columns return @columns if @columns ds = unfiltered.unordered.clone(:distinct => nil, :limit => 1, :offset=>nil) ds.each{break} @columns = ds.instance_variable_get(:@columns) @columns || [] end
Ignore any cached column information and perform a query to retrieve a row in order to get the columns.
DB[:table].columns! # => [:id, :name]
# File lib/sequel/dataset/actions.rb, line 94 def columns! @columns = nil columns end
Returns the number of records in the dataset. If an argument is provided, it is used as the argument to count. If a block is provided, it is treated as a virtual row, and the result is used as the argument to count.
DB[:table].count # SELECT COUNT(*) AS count FROM table LIMIT 1
# => 3

DB[:table].count(:column) # SELECT COUNT(column) AS count FROM table LIMIT 1
# => 2

DB[:table].count{foo(column)} # SELECT COUNT(foo(column)) AS count FROM table LIMIT 1
# => 1
# File lib/sequel/dataset/actions.rb, line 110 def count(arg=(no_arg=true), &block) if no_arg if block arg = Sequel.virtual_row(&block) aggregate_dataset.get{COUNT(arg).as(count)} else aggregate_dataset.get{COUNT(:*){}.as(count)}.to_i end elsif block raise Error, 'cannot provide both argument and block to Dataset#count' else aggregate_dataset.get{COUNT(arg).as(count)} end end
Deletes the records in the dataset. The returned value should be number of records deleted, but that is adapter dependent.
DB[:table].delete # DELETE FROM table
# => 3
# File lib/sequel/dataset/actions.rb, line 130 def delete(&block) sql = delete_sql if uses_returning?(:delete) returning_fetch_rows(sql, &block) else execute_dui(sql) end end
Iterates over the records in the dataset as they are yielded from the database adapter, and returns self.
DB[:table].each{|row| p row} # SELECT * FROM table
Note that this method is not safe to use on many adapters if you are running additional queries inside the provided block. If you are running queries inside the block, you should use all instead of each for the outer queries, or use a separate thread or shard inside each.
# File lib/sequel/dataset/actions.rb, line 148 def each if @opts[:graph] graph_each{|r| yield r} elsif row_proc = @row_proc fetch_rows(select_sql){|r| yield row_proc.call(r)} else fetch_rows(select_sql){|r| yield r} end self end
Yields a paginated dataset for each page and returns the receiver. Does a count to find the total number of records for this dataset.
# File lib/sequel/extensions/pagination.rb, line 26 def each_page(page_size) raise(Error, "You cannot paginate a dataset that already has a limit") if @opts[:limit] record_count = count total_pages = (record_count / page_size.to_f).ceil (1..total_pages).each{|page_no| yield paginate(page_no, page_size, record_count)} self end
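A sketch, assuming the pagination extension is loaded (the table is illustrative):

DB[:items].order(:id).each_page(100) do |page|
  page.each{|row| p row} # each page is a paginated dataset
end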
Returns true if no records exist in the dataset, false otherwise
DB[:table].empty? # SELECT 1 AS one FROM table LIMIT 1
# => false
# File lib/sequel/dataset/actions.rb, line 163 def empty? get(Sequel::SQL::AliasedExpression.new(1, :one)).nil? end
Executes a select query and fetches records, yielding each record to the supplied block. The yielded records should be hashes with symbol keys. This method should probably not be called by user code; use each instead.
# File lib/sequel/dataset/actions.rb, line 171 def fetch_rows(sql) raise NotImplemented, NOTIMPL_MSG end
If an integer argument is given, it is interpreted as a limit, and then returns all matching records up to that limit. If no argument is passed, it returns the first matching record. If any other type of argument(s) is passed, it is given to filter and the first matching record is returned. If a block is given, it is used to filter the dataset before returning anything. Examples:
DB[:table].first # SELECT * FROM table LIMIT 1
# => {:id=>7}

DB[:table].first(2) # SELECT * FROM table LIMIT 2
# => [{:id=>6}, {:id=>4}]

DB[:table].first(:id=>2) # SELECT * FROM table WHERE (id = 2) LIMIT 1
# => {:id=>2}

DB[:table].first("id = 3") # SELECT * FROM table WHERE (id = 3) LIMIT 1
# => {:id=>3}

DB[:table].first("id = ?", 4) # SELECT * FROM table WHERE (id = 4) LIMIT 1
# => {:id=>4}

DB[:table].first{id > 2} # SELECT * FROM table WHERE (id > 2) LIMIT 1
# => {:id=>5}

DB[:table].first("id > ?", 4){id < 6} # SELECT * FROM table WHERE ((id > 4) AND (id < 6)) LIMIT 1
# => {:id=>5}

DB[:table].first(2){id < 2} # SELECT * FROM table WHERE (id < 2) LIMIT 2
# => [{:id=>1}]
# File lib/sequel/dataset/actions.rb, line 205 def first(*args, &block) ds = block ? filter(&block) : self if args.empty? ds.single_record else args = (args.size == 1) ? args.first : args if Integer === args ds.limit(args).all else ds.filter(args).single_record end end end
Return the column value for the first matching record in the dataset. Raises an error if both an argument and block is given.
DB[:table].get(:id) # SELECT id FROM table LIMIT 1
# => 3

ds.get{sum(id)} # SELECT sum(id) FROM table LIMIT 1
# => 6
You can pass an array of expressions to return multiple values, but you must make sure each element in the array has an alias that Sequel can determine:
DB[:table].get([:id, :name]) # SELECT id, name FROM table LIMIT 1
# => [3, 'foo']

DB[:table].get{[sum(id).as(sum), name]} # SELECT sum(id) AS sum, name FROM table LIMIT 1
# => [6, 'foo']
# File lib/sequel/dataset/actions.rb, line 238 def get(column=(no_arg=true; nil), &block) ds = naked if block raise(Error, ARG_BLOCK_ERROR_MSG) unless no_arg ds = ds.select(&block) column = ds.opts[:select] column = nil if column.is_a?(Array) && column.length < 2 else ds = if column.is_a?(Array) ds.select(*column) else ds.select(column) end end if column.is_a?(Array) ds.single_record.values_at(*column.map{|c| hash_key_symbol(c)}) else ds.single_value end end
Inserts multiple records into the associated table. This method can be used to efficiently insert a large number of records into a table in a single query if the database supports it. Inserts are automatically wrapped in a transaction.
This method is called with a columns array and an array of value arrays:
DB[:table].import([:x, :y], [[1, 2], [3, 4]])
# INSERT INTO table (x, y) VALUES (1, 2)
# INSERT INTO table (x, y) VALUES (3, 4)
This method also accepts a dataset instead of an array of value arrays:
DB[:table].import([:x, :y], DB[:table2].select(:a, :b)) # INSERT INTO table (x, y) SELECT a, b FROM table2
Options:
:commit_every - Open a new transaction for every given number of records. For example, if you provide a value of 50, will commit after every 50 records.
:server - Set the server/shard to use for the transaction and insert queries.
:slice - Same as :commit_every, with :commit_every taking precedence.
# File lib/sequel/dataset/actions.rb, line 283 def import(columns, values, opts={}) return @db.transaction{insert(columns, values)} if values.is_a?(Dataset) return if values.empty? raise(Error, IMPORT_ERROR_MSG) if columns.empty? ds = opts[:server] ? server(opts[:server]) : self if slice_size = opts[:commit_every] || opts[:slice] offset = 0 rows = [] while offset < values.length rows << ds._import(columns, values[offset, slice_size], opts) offset += slice_size end rows.flatten else ds._import(columns, values, opts) end end
Inserts values into the associated table. The returned value is generally the value of the primary key for the inserted row, but that is adapter dependent.
insert handles a number of different argument formats:
no arguments or a single empty hash - Inserts using DEFAULT VALUES
single hash - Most common format, treats keys as columns and values as values
single array - Treats entries as values, with no columns
two arrays - Treats the first array as columns, the second array as values
single Dataset - Treats as an insert based on a selection from the dataset given, with no columns
array and Dataset - Treats as an insert based on a selection from the dataset given, with the columns given by the array.
Examples:
DB[:items].insert # INSERT INTO items DEFAULT VALUES
DB[:items].insert({}) # INSERT INTO items DEFAULT VALUES
DB[:items].insert([1,2,3]) # INSERT INTO items VALUES (1, 2, 3)
DB[:items].insert([:a, :b], [1,2]) # INSERT INTO items (a, b) VALUES (1, 2)
DB[:items].insert(:a => 1, :b => 2) # INSERT INTO items (a, b) VALUES (1, 2)
DB[:items].insert(DB[:old_items]) # INSERT INTO items SELECT * FROM old_items
DB[:items].insert([:a, :b], DB[:old_items]) # INSERT INTO items (a, b) SELECT * FROM old_items
# File lib/sequel/dataset/actions.rb, line 338 def insert(*values, &block) sql = insert_sql(*values) if uses_returning?(:insert) returning_fetch_rows(sql, &block) else execute_insert(sql) end end
Inserts multiple values. If a block is given it is invoked for each item in the given array before inserting it. See multi_insert as a possibly faster version that may be able to insert multiple records in one SQL statement (if supported by the database).
Returns an array of primary keys of inserted rows.
DB[:table].insert_multiple([{:x=>1}, {:x=>2}])
# => [4, 5]
# INSERT INTO table (x) VALUES (1)
# INSERT INTO table (x) VALUES (2)

DB[:table].insert_multiple([{:x=>1}, {:x=>2}]){|row| row[:y] = row[:x] * 2; row }
# => [6, 7]
# INSERT INTO table (x, y) VALUES (1, 2)
# INSERT INTO table (x, y) VALUES (2, 4)
# File lib/sequel/dataset/actions.rb, line 362 def insert_multiple(array, &block) if block array.map{|i| insert(block.call(i))} else array.map{|i| insert(i)} end end
Returns the interval between minimum and maximum values for the given column/expression. Uses a virtual row block if no argument is given.
DB[:table].interval(:id) # SELECT (max(id) - min(id)) FROM table LIMIT 1
# => 6

DB[:table].interval{function(column)} # SELECT (max(function(column)) - min(function(column))) FROM table LIMIT 1
# => 7
# File lib/sequel/dataset/actions.rb, line 377 def interval(column=Sequel.virtual_row(&Proc.new)) aggregate_dataset.get{max(column) - min(column)} end
Reverses the order and then runs first with the given arguments and block. Note that this will not necessarily give you the last record in the dataset, unless you have an unambiguous order. If there is not currently an order for this dataset, raises an Error.
DB[:table].order(:id).last # SELECT * FROM table ORDER BY id DESC LIMIT 1
# => {:id=>10}

DB[:table].order(Sequel.desc(:id)).last(2) # SELECT * FROM table ORDER BY id ASC LIMIT 2
# => [{:id=>1}, {:id=>2}]
# File lib/sequel/dataset/actions.rb, line 391 def last(*args, &block) raise(Error, 'No order specified') unless @opts[:order] reverse.first(*args, &block) end
Maps column values for each record in the dataset (if a column name is given), or performs the stock mapping functionality of Enumerable otherwise. Raises an Error if both an argument and block are given.
DB[:table].map(:id) # SELECT * FROM table
# => [1, 2, 3, ...]

DB[:table].map{|r| r[:id] * 2} # SELECT * FROM table
# => [2, 4, 6, ...]
You can also provide an array of column names:
DB[:table].map([:id, :name]) # SELECT * FROM table
# => [[1, 'A'], [2, 'B'], [3, 'C'], ...]
# File lib/sequel/dataset/actions.rb, line 410 def map(column=nil, &block) if column raise(Error, ARG_BLOCK_ERROR_MSG) if block return naked.map(column) if row_proc if column.is_a?(Array) super(){|r| r.values_at(*column)} else super(){|r| r[column]} end else super(&block) end end
Returns the maximum value for the given column/expression. Uses a virtual row block if no argument is given.
DB[:table].max(:id) # SELECT max(id) FROM table LIMIT 1
# => 10

DB[:table].max{function(column)} # SELECT max(function(column)) FROM table LIMIT 1
# => 7
# File lib/sequel/dataset/actions.rb, line 431 def max(column=Sequel.virtual_row(&Proc.new)) aggregate_dataset.get{max(column)} end
Returns the minimum value for the given column/expression. Uses a virtual row block if no argument is given.
DB[:table].min(:id) # SELECT min(id) FROM table LIMIT 1
# => 1

DB[:table].min{function(column)} # SELECT min(function(column)) FROM table LIMIT 1
# => 0
# File lib/sequel/dataset/actions.rb, line 442 def min(column=Sequel.virtual_row(&Proc.new)) aggregate_dataset.get{min(column)} end
This is a front end for import that allows you to submit an array of hashes instead of arrays of columns and values:
DB[:table].multi_insert([{:x => 1}, {:x => 2}])
# INSERT INTO table (x) VALUES (1)
# INSERT INTO table (x) VALUES (2)
Be aware that all hashes should have the same keys if you use this calling method, otherwise some columns could be missed or set to null instead of to default values.
This respects the same options as import.
# File lib/sequel/dataset/actions.rb, line 458 def multi_insert(hashes, opts={}) return if hashes.empty? columns = hashes.first.keys import(columns, hashes.map{|h| columns.map{|c| h[c]}}, opts) end
Return a cloned nullified dataset.
# File lib/sequel/extensions/null_dataset.rb, line 87 def nullify clone.nullify! end
Nullify the current dataset
# File lib/sequel/extensions/null_dataset.rb, line 92 def nullify! extend NullDataset end
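A sketch, assuming the null_dataset extension is loaded; a nullified dataset never sends queries to the database:

ds = DB[:items].nullify
ds.each{|row| p row} # yields nothing, no query issued
ds.delete # no-op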
Returns a paginated dataset. The returned dataset is limited to the page size at the correct offset, and extended with the Pagination module. If a record count is not provided, does a count of total number of records for this dataset.
# File lib/sequel/extensions/pagination.rb, line 17 def paginate(page_no, page_size, record_count=nil) raise(Error, "You cannot paginate a dataset that already has a limit") if @opts[:limit] paginated = limit(page_size, (page_no - 1) * page_size) paginated.extend(Pagination) paginated.set_pagination_info(page_no, page_size, record_count || count) end
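For example, assuming the pagination extension is loaded (page numbers start at 1):

page = DB[:items].order(:id).paginate(2, 25)
# SELECT * FROM items ORDER BY id LIMIT 25 OFFSET 25
page.page_count # total number of pages for the dataset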
Pretty prints the records in the dataset as a plain-text table.
# File lib/sequel/extensions/pretty_table.rb, line 21 def print(*cols) ds = naked rows = ds.all Sequel::PrettyTable.print(rows, cols.empty? ? ds.columns : cols) end
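For example, assuming the pretty_table extension is loaded:

DB[:items].print(:id, :name)
# Prints something like:
# +--+----+
# |id|name|
# +--+----+
# | 1|abc |
# +--+----+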
Translates a query block into a dataset. Query blocks can be useful when expressing complex SELECT statements, e.g.:
dataset = DB[:items].query do
  select :x, :y, :z
  filter{(x > 1) & (y > 2)}
  reverse :z
end
Which is the same as:
dataset = DB[:items].select(:x, :y, :z).filter{(x > 1) & (y > 2)}.reverse(:z)
Note that inside a call to query, you cannot call each, insert, update, or delete (or any method that calls those), or Sequel will raise an error.
# File lib/sequel/extensions/query.rb, line 36 def query(&block) copy = clone({}) copy.extend(QueryBlockCopy) copy.instance_eval(&block) clone(copy.opts) end
Returns a Range instance made from the minimum and maximum values for the given column/expression. Uses a virtual row block if no argument is given.
DB[:table].range(:id) # SELECT max(id) AS v1, min(id) AS v2 FROM table LIMIT 1
# => 1..10

DB[:table].range{function(column)} # SELECT max(function(column)) AS v1, min(function(column)) AS v2 FROM table LIMIT 1
# => 0..7
# File lib/sequel/dataset/actions.rb, line 471 def range(column=Sequel.virtual_row(&Proc.new)) if r = aggregate_dataset.select{[min(column).as(v1), max(column).as(v2)]}.first (r[:v1]..r[:v2]) end end
Returns a hash with key_column values as keys and value_column values as values. Similar to #to_hash, but only selects the columns given.
DB[:table].select_hash(:id, :name) # SELECT id, name FROM table # => {1=>'a', 2=>'b', ...}
You can also provide an array of column names for either the key_column, the value column, or both:
DB[:table].select_hash([:id, :foo], [:name, :bar]) # SELECT * FROM table # {[1, 3]=>['a', 'c'], [2, 4]=>['b', 'd'], ...}
When using this method, you must be sure that each expression has an alias that Sequel can determine. Usually you can do this by calling the as method on the expression and providing an alias.
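For example, a computed key expression would be aliased explicitly (a sketch; Sequel.function and the column names are illustrative):

  DB[:table].select_hash(Sequel.function(:lower, :name).as(:lname), :id)
  # SELECT lower(name) AS lname, id FROM table
  # => {'jim'=>1, 'bob'=>2, ...}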
# File lib/sequel/dataset/actions.rb, line 492 def select_hash(key_column, value_column) _select_hash(:to_hash, key_column, value_column) end
Returns a hash with key_column values as keys and an array of value_column values. Similar to #to_hash_groups, but only selects the columns given.
DB[:table].select_hash_groups(:name, :id) # SELECT id, name FROM table # => {'a'=>[1, 4, ...], 'b'=>[2, ...], ...}
You can also provide an array of column names for either the key_column, the value column, or both:
DB[:table].select_hash_groups([:first, :middle], [:last, :id]) # SELECT * FROM table # {['a', 'b']=>[['c', 1], ['d', 2], ...], ...}
When using this method, you must be sure that each expression has an alias that Sequel can determine. Usually you can do this by calling the as method on the expression and providing an alias.
# File lib/sequel/dataset/actions.rb, line 511 def select_hash_groups(key_column, value_column) _select_hash(:to_hash_groups, key_column, value_column) end
Selects the column given (either as an argument or as a block), and returns an array of all values of that column in the dataset. If you give a block argument that returns an array with multiple entries, the contents of the resulting array are undefined. Raises an Error if called with both an argument and a block.
DB[:table].select_map(:id) # SELECT id FROM table # => [3, 5, 8, 1, ...] DB[:table].select_map{id * 2} # SELECT (id * 2) FROM table # => [6, 10, 16, 2, ...]
You can also provide an array of column names:
DB[:table].select_map([:id, :name]) # SELECT id, name FROM table # => [[1, 'A'], [2, 'B'], [3, 'C'], ...]
If you provide an array of expressions, you must be sure that each entry in the array has an alias that Sequel can determine. Usually you can do this by calling the as method on the expression and providing an alias.
# File lib/sequel/dataset/actions.rb, line 535 def select_map(column=nil, &block) _select_map(column, false, &block) end
The same as #select_map, but in addition orders the array by the column.
DB[:table].select_order_map(:id) # SELECT id FROM table ORDER BY id # => [1, 2, 3, 4, ...] DB[:table].select_order_map{id * 2} # SELECT (id * 2) FROM table ORDER BY (id * 2) # => [2, 4, 6, 8, ...]
You can also provide an array of column names:
DB[:table].select_order_map([:id, :name]) # SELECT id, name FROM table ORDER BY id, name # => [[1, 'A'], [2, 'B'], [3, 'C'], ...]
If you provide an array of expressions, you must be sure that each entry in the array has an alias that Sequel can determine. Usually you can do this by calling the as method on the expression and providing an alias.
# File lib/sequel/dataset/actions.rb, line 555 def select_order_map(column=nil, &block) _select_map(column, true, &block) end
Remove columns from the list of selected columns. If any of the currently selected columns use expressions/aliases, this will remove selected columns with the given aliases. It will also remove entries from the selection that match exactly:
# Assume columns a, b, and c in items table DB[:items] # SELECT * FROM items DB[:items].select_remove(:c) # SELECT a, b FROM items DB[:items].select(:a, :b___c, :c___b).select_remove(:c) # SELECT a, c AS b FROM items DB[:items].select(:a, :b___c, :c___b).select_remove(:c___b) # SELECT a, b AS c FROM items
Note that there are a few cases where this method may not work correctly:
This dataset joins multiple tables and does not have an existing explicit selection. In this case, the code will currently use unqualified column names for all columns the dataset returns, except for the columns given.
This dataset has an existing explicit selection containing an item that returns multiple database columns (e.g. Sequel.expr(:table).*, Sequel.lit('column1, column2')). In this case, the behavior is undefined and this method should not be used.
There may be other cases where this method does not work correctly, use it with caution.
# File lib/sequel/extensions/select_remove.rb, line 31 def select_remove(*cols) if (sel = @opts[:select]) && !sel.empty? select(*(columns.zip(sel).reject{|c, s| cols.include?(c)}.map{|c, s| s} - cols)) else select(*(columns - cols)) end end
Alias for update, but not aliased directly so subclasses don't have to override both methods.
# File lib/sequel/dataset/actions.rb, line 561 def set(*args) update(*args) end
Returns the first record in the dataset, or nil if the dataset has no records. Users should probably use first instead of this method.
# File lib/sequel/dataset/actions.rb, line 568 def single_record clone(:limit=>1).each{|r| return r} nil end
Returns the first value of the first record in the dataset. Returns nil if the dataset is empty. Users should generally use get instead of this method.
# File lib/sequel/dataset/actions.rb, line 576 def single_value if r = naked.ungraphed.single_record r.values.first end end
Returns the sum for the given column/expression. Uses a virtual row block if no column is given.
DB[:table].sum(:id) # SELECT sum(id) FROM table LIMIT 1 # => 55 DB[:table].sum{function(column)} # SELECT sum(function(column)) FROM table LIMIT 1 # => 10
# File lib/sequel/dataset/actions.rb, line 589 def sum(column=Sequel.virtual_row(&Proc.new)) aggregate_dataset.get{sum(column)} end
Returns a string in CSV format containing the dataset records. By default the CSV representation includes the column titles in the first line. You can turn that off by passing false as the include_column_titles argument.
This does not use a CSV library or handle quoting of values in any way. If any values in any of the rows could include commas or line endings, you shouldn't use this.
puts DB[:table].to_csv # SELECT * FROM table # id,name # 1,Jim # 2,Bob
# File lib/sequel/dataset/actions.rb, line 606 def to_csv(include_column_titles = true) n = naked cols = n.columns csv = '' csv << "#{cols.join(COMMA_SEPARATOR)}\r\n" if include_column_titles n.each{|r| csv << "#{cols.collect{|c| r[c]}.join(COMMA_SEPARATOR)}\r\n"} csv end
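If values may contain commas, quotes, or newlines, a safer sketch is to build the output with Ruby's standard csv library instead (illustrative; assumes the Ruby 1.9+ CSV API):

  require 'csv'

  ds = DB[:table].naked
  cols = ds.columns
  csv = CSV.generate do |out|
    out << cols.map{|c| c.to_s}              # header row
    ds.each{|row| out << cols.map{|c| row[c]}}
  end
  puts csv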
Return a string that can be processed by the dot program (included with graphviz) in order to see a visualization of the dataset's abstract syntax tree.
# File lib/sequel/extensions/to_dot.rb, line 149 def to_dot ToDot.output(self) end
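A typical workflow (file names are illustrative, and this assumes the to_dot extension is already loaded for the dataset) writes the output to a file and renders it with dot:

  File.open('items.dot', 'w'){|f| f << DB[:items].where(:id => 1).to_dot}
  # then, from a shell: dot -Tpng items.dot -o items.png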
Returns a hash with one column used as key and another used as value. If rows have duplicate values for the key column, the latter row(s) will overwrite the value of the previous row(s). If the value_column is not given or nil, uses the entire hash as the value.
DB[:table].to_hash(:id, :name) # SELECT * FROM table # {1=>'Jim', 2=>'Bob', ...} DB[:table].to_hash(:id) # SELECT * FROM table # {1=>{:id=>1, :name=>'Jim'}, 2=>{:id=>2, :name=>'Bob'}, ...}
You can also provide an array of column names for either the key_column, the value column, or both:
DB[:table].to_hash([:id, :foo], [:name, :bar]) # SELECT * FROM table # {[1, 3]=>['Jim', 'bo'], [2, 4]=>['Bob', 'be'], ...} DB[:table].to_hash([:id, :name]) # SELECT * FROM table # {[1, 'Jim']=>{:id=>1, :name=>'Jim'}, [2, 'Bob']=>{:id=>2, :name=>'Bob'}, ...}
# File lib/sequel/dataset/actions.rb, line 634 def to_hash(key_column, value_column = nil) h = {} if value_column return naked.to_hash(key_column, value_column) if row_proc if value_column.is_a?(Array) if key_column.is_a?(Array) each{|r| h[r.values_at(*key_column)] = r.values_at(*value_column)} else each{|r| h[r[key_column]] = r.values_at(*value_column)} end else if key_column.is_a?(Array) each{|r| h[r.values_at(*key_column)] = r[value_column]} else each{|r| h[r[key_column]] = r[value_column]} end end elsif key_column.is_a?(Array) each{|r| h[r.values_at(*key_column)] = r} else each{|r| h[r[key_column]] = r} end h end
Returns a hash with one column used as key and the values being an array of column values. If the value_column is not given or nil, uses the entire hash as the value.
DB[:table].to_hash_groups(:name, :id) # SELECT * FROM table # {'Jim'=>[1, 4, 16, ...], 'Bob'=>[2], ...} DB[:table].to_hash_groups(:name) # SELECT * FROM table # {'Jim'=>[{:id=>1, :name=>'Jim'}, {:id=>4, :name=>'Jim'}, ...], 'Bob'=>[{:id=>2, :name=>'Bob'}], ...}
You can also provide an array of column names for either the key_column, the value column, or both:
DB[:table].to_hash_groups([:first, :middle], [:last, :id]) # SELECT * FROM table # {['Jim', 'Bob']=>[['Smith', 1], ['Jackson', 4], ...], ...} DB[:table].to_hash_groups([:first, :middle]) # SELECT * FROM table # {['Jim', 'Bob']=>[{:id=>1, :first=>'Jim', :middle=>'Bob', :last=>'Smith'}, ...], ...}
# File lib/sequel/dataset/actions.rb, line 677 def to_hash_groups(key_column, value_column = nil) h = {} if value_column return naked.to_hash_groups(key_column, value_column) if row_proc if value_column.is_a?(Array) if key_column.is_a?(Array) each{|r| (h[r.values_at(*key_column)] ||= []) << r.values_at(*value_column)} else each{|r| (h[r[key_column]] ||= []) << r.values_at(*value_column)} end else if key_column.is_a?(Array) each{|r| (h[r.values_at(*key_column)] ||= []) << r[value_column]} else each{|r| (h[r[key_column]] ||= []) << r[value_column]} end end elsif key_column.is_a?(Array) each{|r| (h[r.values_at(*key_column)] ||= []) << r} else each{|r| (h[r[key_column]] ||= []) << r} end h end
Truncates the dataset. Returns nil.
DB[:table].truncate # TRUNCATE table # => nil
# File lib/sequel/dataset/actions.rb, line 706 def truncate execute_ddl(truncate_sql) end
Updates values for the dataset. The returned value is generally the number of rows updated, but that is adapter dependent. values should be a hash where the keys are columns to set and values are the values to which to set the columns.
DB[:table].update(:x=>nil) # UPDATE table SET x = NULL # => 10 DB[:table].update(:x=>:x+1, :y=>0) # UPDATE table SET x = (x + 1), y = 0 # => 10
# File lib/sequel/dataset/actions.rb, line 720 def update(values={}, &block) sql = update_sql(values) if uses_returning?(:update) returning_fetch_rows(sql, &block) else execute_dui(sql) end end
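On databases where uses_returning?(:update) is true (PostgreSQL, for example), the block receives each returned row; elsewhere only the row count comes back. A hedged sketch, assuming the adapter supports a RETURNING clause set via #returning:

  DB[:table].returning(:id).update(:x => 1) do |row|
    # each updated row's returned columns, e.g. {:id => 5}
    puts row[:id]
  end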
Execute the given SQL and return the number of rows deleted. This exists solely as an optimization, replacing #with_sql(sql).delete. It's significantly faster as it does not require cloning the current dataset.
# File lib/sequel/dataset/actions.rb, line 732 def with_sql_delete(sql) execute_dui(sql) end
Internals of import. If primary key values are requested, use separate insert commands for each row. Otherwise, call multi_insert_sql and execute each statement it gives separately.
# File lib/sequel/dataset/actions.rb, line 741 def _import(columns, values, opts) trans_opts = opts.merge(:server=>@opts[:server]) if opts[:return] == :primary_key @db.transaction(trans_opts){values.map{|v| insert(columns, v)}} else stmts = multi_insert_sql(columns, values) @db.transaction(trans_opts){stmts.each{|st| execute_dui(st)}} end end
Return an array of arrays of values given by the symbols in ret_cols.
# File lib/sequel/dataset/actions.rb, line 752 def _select_map_multiple(ret_cols) map{|r| r.values_at(*ret_cols)} end
Returns an array of the first value in each row.
# File lib/sequel/dataset/actions.rb, line 757 def _select_map_single map{|r| r.values.first} end
These are methods you can call to see what SQL will be generated by the dataset.
Returns an EXISTS clause for the dataset as a LiteralString.
DB.select(1).where(DB[:items].exists) # SELECT 1 WHERE (EXISTS (SELECT * FROM items))
# File lib/sequel/dataset/sql.rb, line 22 def exists SQL::PlaceholderLiteralString.new(EXISTS, [self], true) end
Returns an INSERT SQL query string. See insert.
DB[:items].insert_sql(:a=>1) # => "INSERT INTO items (a) VALUES (1)"
# File lib/sequel/dataset/sql.rb, line 30 def insert_sql(*values) return static_sql(@opts[:sql]) if @opts[:sql] check_modification_allowed! columns = [] case values.size when 0 return insert_sql({}) when 1 case vals = values.at(0) when Hash vals = @opts[:defaults].merge(vals) if @opts[:defaults] vals = vals.merge(@opts[:overrides]) if @opts[:overrides] values = [] vals.each do |k,v| columns << k values << v end when Dataset, Array, LiteralString values = vals end when 2 if (v0 = values.at(0)).is_a?(Array) && ((v1 = values.at(1)).is_a?(Array) || v1.is_a?(Dataset) || v1.is_a?(LiteralString)) columns, values = v0, v1 raise(Error, "Different number of values and columns given to insert_sql") if values.is_a?(Array) and columns.length != values.length end end if values.is_a?(Array) && values.empty? && !insert_supports_empty_values? columns = [columns().last] values = [DEFAULT] end clone(:columns=>columns, :values=>values)._insert_sql end
Returns a literal representation of a value to be used as part of an SQL expression.
DB[:items].literal("abc'def\\") #=> "'abc''def\\\\'" DB[:items].literal(:items__id) #=> "items.id" DB[:items].literal([1, 2, 3]) => "(1, 2, 3)" DB[:items].literal(DB[:items]) => "(SELECT * FROM items)" DB[:items].literal(:x + 1 > :y) => "((x + 1) > y)"
If an unsupported object is given, an Error is raised.
# File lib/sequel/dataset/sql.rb, line 77 def literal_append(sql, v) case v when Symbol literal_symbol_append(sql, v) when String case v when LiteralString sql << v when SQL::Blob literal_blob_append(sql, v) else literal_string_append(sql, v) end when Integer sql << literal_integer(v) when Hash literal_hash_append(sql, v) when SQL::Expression literal_expression_append(sql, v) when Float sql << literal_float(v) when BigDecimal sql << literal_big_decimal(v) when NilClass sql << literal_nil when TrueClass sql << literal_true when FalseClass sql << literal_false when Array literal_array_append(sql, v) when Time sql << (v.is_a?(SQLTime) ? literal_sqltime(v) : literal_time(v)) when DateTime sql << literal_datetime(v) when Date sql << literal_date(v) when Dataset literal_dataset_append(sql, v) else literal_other_append(sql, v) end end
Returns an array of insert statements for inserting multiple records. This method is used by multi_insert to format insert statements and expects a keys array and an array of value arrays. This method should be overridden by descendants if they support inserting multiple records in a single SQL statement.
# File lib/sequel/dataset/sql.rb, line 127 def multi_insert_sql(columns, values) values.map{|r| insert_sql(columns, r)} end
Same as select_sql, not aliased directly to make subclassing simpler.
# File lib/sequel/dataset/sql.rb, line 140 def sql select_sql end
Returns a TRUNCATE SQL query string. See truncate.
DB[:items].truncate_sql # => 'TRUNCATE items'
# File lib/sequel/dataset/sql.rb, line 147 def truncate_sql if opts[:sql] static_sql(opts[:sql]) else check_truncation_allowed! raise(InvalidOperation, "Can't truncate filtered datasets") if opts[:where] || opts[:having] _truncate_sql(source_list(opts[:from])) end end
Formats an UPDATE statement using the given values. See update.
DB[:items].update_sql(:price => 100, :category => 'software') # => "UPDATE items SET price = 100, category = 'software'"
Raises an Error if the dataset is grouped or includes more than one table.
# File lib/sequel/dataset/sql.rb, line 164 def update_sql(values = {}) return static_sql(opts[:sql]) if opts[:sql] check_modification_allowed! clone(:values=>values)._update_sql end
These methods all return booleans, with most describing whether or not the dataset supports a feature.
Whether this dataset will provide an accurate number of rows matched for delete and update statements. Accurate in this case means the number of rows matched by the dataset's filter.
# File lib/sequel/dataset/features.rb, line 23 def provides_accurate_rows_matched? true end
Whether this dataset quotes identifiers.
# File lib/sequel/dataset/features.rb, line 10 def quote_identifiers? if defined?(@quote_identifiers) @quote_identifiers elsif db.respond_to?(:quote_identifiers?) @quote_identifiers = db.quote_identifiers? else @quote_identifiers = false end end
Whether you must use a column alias list for recursive CTEs (false by default).
# File lib/sequel/dataset/features.rb, line 29 def recursive_cte_requires_column_aliases? false end
Whether type specifiers are required for prepared statement/bound variable argument placeholders (i.e. :bv__integer)
# File lib/sequel/dataset/features.rb, line 41 def requires_placeholder_type_specifiers? false end
Whether the dataset requires SQL standard datetimes (false by default, as most allow strings with ISO 8601 format).
# File lib/sequel/dataset/features.rb, line 35 def requires_sql_standard_datetimes? false end
Whether the dataset supports common table expressions (the WITH clause). If given, type can be :select, :insert, :update, or :delete, in which case it determines whether WITH is supported for the respective statement type.
# File lib/sequel/dataset/features.rb, line 48 def supports_cte?(type=:select) send(:"#{type}_clause_methods").include?(:"#{type}_with_sql") end
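For example, a caller might guard a WITH clause behind this check (a sketch; the table names and the fallback are illustrative):

  ds = DB[:items].where(:active => true)
  if ds.supports_cte?
    DB[:recent].with(:recent, ds).all   # WITH recent AS (...) SELECT * FROM recent
  else
    ds.all                              # plain query on databases without CTEs
  end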
Whether the dataset supports common table expressions (the WITH clause) in subqueries. If false, applies the WITH clause to the main query, which can cause issues if multiple WITH clauses use the same name.
# File lib/sequel/dataset/features.rb, line 55 def supports_cte_in_subqueries? false end
Whether the dataset supports CUBE with GROUP BY.
# File lib/sequel/dataset/features.rb, line 65 def supports_group_cube? false end
Whether the dataset supports ROLLUP with GROUP BY.
# File lib/sequel/dataset/features.rb, line 70 def supports_group_rollup? false end
Whether this dataset supports the insert_select method for returning all column values directly from an insert query.
# File lib/sequel/dataset/features.rb, line 76 def supports_insert_select? supports_returning?(:insert) end
Whether the dataset supports the INTERSECT and EXCEPT compound operations, true by default.
# File lib/sequel/dataset/features.rb, line 81 def supports_intersect_except? true end
Whether the dataset supports the IS TRUE syntax.
# File lib/sequel/dataset/features.rb, line 91 def supports_is_true? true end
Whether the dataset supports the JOIN table USING (column1, ...) syntax.
# File lib/sequel/dataset/features.rb, line 96 def supports_join_using? true end
Whether modifying joined datasets is supported.
# File lib/sequel/dataset/features.rb, line 101 def supports_modifying_joins? false end
Whether the IN/NOT IN operators support multiple columns when an array of values is given.
# File lib/sequel/dataset/features.rb, line 107 def supports_multiple_column_in? true end
Whether the dataset supports pattern matching by regular expressions.
# File lib/sequel/dataset/features.rb, line 118 def supports_regexp? false end
Whether the RETURNING clause is supported for the given type of query. type can be :insert, :update, or :delete.
# File lib/sequel/dataset/features.rb, line 124 def supports_returning?(type) send(:"#{type}_clause_methods").include?(:"#{type}_returning_sql") end
Whether the dataset supports timezones in literal timestamps
# File lib/sequel/dataset/features.rb, line 134 def supports_timestamp_timezones? false end
Whether the dataset supports fractional seconds in literal timestamps
# File lib/sequel/dataset/features.rb, line 139 def supports_timestamp_usecs? true end
Whether the dataset supports window functions.
# File lib/sequel/dataset/features.rb, line 144 def supports_window_functions? false end
These methods don't fit cleanly into another section.
The database related to this dataset. This is the Database instance that will execute all of this dataset's queries.
The hash of options for this dataset, keys are symbols.
Constructs a new Dataset instance with an associated database and options. Datasets are usually constructed by invoking the Database#[] method:
DB[:posts]
Sequel::Dataset is an abstract class that is not useful by itself. Each database adapter provides a subclass of Sequel::Dataset, and has the Sequel::Database#dataset method return an instance of that subclass.
# File lib/sequel/dataset/misc.rb, line 28 def initialize(db, opts = nil) @db = db @opts = opts || {} end
Two datasets are considered equal if they have the same class, DB, opts, and SQL.
# File lib/sequel/dataset/misc.rb, line 35 def ==(o) o.is_a?(self.class) && db == o.db && opts == o.opts && sql == o.sql end
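For instance, two independently built datasets compare as equal when their class, database, options, and SQL match (a sketch):

  DB[:items].where(:id => 1) == DB[:items].where(:id => 1)  # => true
  DB[:items].where(:id => 1) == DB[:items].where(:id => 2)  # => false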
Yield a dataset for each server in the connection pool that is tied to that server. Intended for use in sharded environments where all servers need to be modified with the same data:
DB[:configs].where(:key=>'setting').each_server{|ds| ds.update(:value=>'new_value')}
# File lib/sequel/dataset/misc.rb, line 49 def each_server db.servers.each{|s| yield server(s)} end
Alias for ==
# File lib/sequel/dataset/misc.rb, line 40 def eql?(o) self == o end
Alias of first_source_alias
# File lib/sequel/dataset/misc.rb, line 54 def first_source first_source_alias end
The first source (primary table) for this dataset. If the dataset doesn't have a table, raises an Error. If the table is aliased, returns the aliased name.
DB[:table].first_source_alias # => :table DB[:table___t].first_source_alias # => :t
# File lib/sequel/dataset/misc.rb, line 66 def first_source_alias source = @opts[:from] if source.nil? || source.empty? raise Error, 'No source specified for query' end case s = source.first when SQL::AliasedExpression s.aliaz when Symbol sch, table, aliaz = split_symbol(s) aliaz ? aliaz.to_sym : s else s end end
The first source (primary table) for this dataset. If the dataset doesn't have a table, raises an error. If the table is aliased, returns the original table, not the alias
DB[:table].first_source_table # => :table DB[:table___t].first_source_table # => :table
# File lib/sequel/dataset/misc.rb, line 91 def first_source_table source = @opts[:from] if source.nil? || source.empty? raise Error, 'No source specified for query' end case s = source.first when SQL::AliasedExpression s.expression when Symbol sch, table, aliaz = split_symbol(s) aliaz ? (sch ? SQL::QualifiedIdentifier.new(sch, table) : table.to_sym) : s else s end end
Define a hash value such that datasets with the same DB, opts, and SQL will have the same hash value
# File lib/sequel/dataset/misc.rb, line 109 def hash [db, opts, sql].hash end
The String instance method to call on identifiers before sending them to the database.
# File lib/sequel/dataset/misc.rb, line 115 def identifier_input_method if defined?(@identifier_input_method) @identifier_input_method elsif db.respond_to?(:identifier_input_method) @identifier_input_method = db.identifier_input_method else @identifier_input_method = nil end end
The String instance method to call on identifiers returned from the database.
# File lib/sequel/dataset/misc.rb, line 127 def identifier_output_method if defined?(@identifier_output_method) @identifier_output_method elsif db.respond_to?(:identifier_output_method) @identifier_output_method = db.identifier_output_method else @identifier_output_method = nil end end
Returns a string representation of the dataset including the class name and the corresponding SQL select statement.
# File lib/sequel/dataset/misc.rb, line 139 def inspect c = self.class c = c.superclass while c.name.nil? || c.name == '' "#<#{c.name}: #{sql.inspect}>" end
The alias to use for the row_number column, used when emulating OFFSET support and for eager limit strategies
# File lib/sequel/dataset/misc.rb, line 147 def row_number_column :x_sequel_row_number_x end
Splits a possible implicit alias in c, handling both SQL::AliasedExpressions and Symbols. Returns an array of two elements, with the first being the main expression and the second being the alias.
# File lib/sequel/dataset/misc.rb, line 154 def split_alias(c) case c when Symbol c_table, column, aliaz = split_symbol(c) [c_table ? SQL::QualifiedIdentifier.new(c_table, column.to_sym) : column.to_sym, aliaz] when SQL::AliasedExpression [c.expression, c.aliaz] when SQL::JoinClause [c.table, c.table_alias] else [c, nil] end end
Creates a unique table alias that hasn't already been used in the dataset. table_alias can be any type of object accepted by alias_symbol. The symbol returned will be the implicit alias in the argument, possibly appended with "_N" if the implicit alias has already been used, where N is an integer starting at 0 and increasing until an unused one is found.
You can also provide a second, additional array argument containing symbols that should not be considered valid table aliases. The current aliases for the FROM and JOIN tables are automatically included in this array.
DB[:table].unused_table_alias(:t) # => :t DB[:table].unused_table_alias(:table) # => :table_0 DB[:table, :table_0].unused_table_alias(:table) # => :table_1 DB[:table, :table_0].unused_table_alias(:table, [:table_1, :table_2]) # => :table_3
# File lib/sequel/dataset/misc.rb, line 190 def unused_table_alias(table_alias, used_aliases = []) table_alias = alias_symbol(table_alias) used_aliases += opts[:from].map{|t| alias_symbol(t)} if opts[:from] used_aliases += opts[:join].map{|j| j.table_alias ? alias_alias_symbol(j.table_alias) : alias_symbol(j.table)} if opts[:join] if used_aliases.include?(table_alias) i = 0 loop do ta = :"#{table_alias}_#{i}" return ta unless used_aliases.include?(ta) i += 1 end else table_alias end end
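A sketch of using this when adding a self-join (the join_table call and its :table_alias option are used here for illustration):

  ds = DB[:employees]
  aliaz = ds.unused_table_alias(:employees)  # => :employees_0
  ds.join_table(:inner, :employees, {:manager_id => :id}, :table_alias => aliaz)
  # SELECT * FROM employees INNER JOIN employees AS employees_0
  #   ON (employees_0.manager_id = employees.id)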
These methods modify the receiving dataset and should be used with care.
All methods that should have a ! method added that modifies the receiver.
Set the method to call on identifiers going into the database for this dataset
Set the method to call on identifiers coming back from the database for this dataset
Whether to quote identifiers for this dataset
Set up mutation (e.g. filter!) methods. These operate the same as the non-! methods, but replace the options of the current dataset with the options of the resulting dataset.
# File lib/sequel/dataset/mutation.rb, line 14 def self.def_mutation_method(*meths) meths.each do |meth| class_eval("def #{meth}!(*args, &block); mutation_method(:#{meth}, *args, &block) end", __FILE__, __LINE__) end end
Add a mutation method to this dataset instance.
# File lib/sequel/dataset/mutation.rb, line 37 def def_mutation_method(*meths) meths.each do |meth| instance_eval("def #{meth}!(*args, &block); mutation_method(:#{meth}, *args, &block) end", __FILE__, __LINE__) end end
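A sketch of wiring up a custom mutation method on a single dataset instance (the recent method defined here is hypothetical):

  ds = DB[:events]

  # A hypothetical query method that returns a modified copy.
  def ds.recent
    where{created_at > Date.today - 7}
  end

  ds.def_mutation_method(:recent)
  ds.recent!  # replaces ds's options with those of the filtered copy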
Remove the #row_proc from the current dataset.
# File lib/sequel/dataset/mutation.rb, line 44 def naked! self.row_proc = nil self end
These methods, while public, are not designed to be used directly by the end user.
Map of emulated function names to native function names.
Given a type (e.g. select) and an array of clauses, return an array of methods to call to build the SQL string.
# File lib/sequel/dataset/sql.rb, line 177 def self.clause_methods(type, clauses) clauses.map{|clause| :"#{type}_#{clause}_sql"}.freeze end
# File lib/sequel/dataset/sql.rb, line 326 def self.def_append_methods(meths) meths.each do |meth| class_eval(" def #{meth}(*args, &block) s = '' #{meth}_append(s, *args, &block) s end ", __FILE__, __LINE__ + 1) end end
SQL fragment for AliasedExpression
# File lib/sequel/dataset/sql.rb, line 341 def aliased_expression_sql_append(sql, ae) literal_append(sql, ae.expression) as_sql_append(sql, ae.aliaz) end
SQL fragment for BooleanConstants
# File lib/sequel/dataset/sql.rb, line 358 def boolean_constant_sql_append(sql, constant) if (constant == true || constant == false) && !supports_where_true? sql << (constant == true ? CONDITION_TRUE : CONDITION_FALSE) else literal_append(sql, constant) end end
SQL fragment for CaseExpression
# File lib/sequel/dataset/sql.rb, line 367 def case_expression_sql_append(sql, ce) sql << CASE_OPEN if ce.expression? sql << SPACE literal_append(sql, ce.expression) end w = CASE_WHEN t = CASE_THEN ce.conditions.each do |c,r| sql << w literal_append(sql, c) sql << t literal_append(sql, r) end sql << CASE_ELSE literal_append(sql, ce.default) sql << CASE_END end
SQL fragment for specifying all columns in a given table
# File lib/sequel/dataset/sql.rb, line 395 def column_all_sql_append(sql, ca) qualified_identifier_sql_append(sql, ca.table, WILDCARD) end
SQL fragment for the complex expression.
# File lib/sequel/dataset/sql.rb, line 400 def complex_expression_sql_append(sql, op, args) case op when *IS_OPERATORS r = args.at(1) if r.nil? || supports_is_true? raise(InvalidOperation, 'Invalid argument used for IS operator') unless v = IS_LITERALS[r] sql << PAREN_OPEN literal_append(sql, args.at(0)) sql << SPACE << op.to_s << SPACE sql << v << PAREN_CLOSE elsif op == :IS complex_expression_sql_append(sql, :"=", args) else complex_expression_sql_append(sql, :OR, [SQL::BooleanExpression.new(:"!=", *args), SQL::BooleanExpression.new(:IS, args.at(0), nil)]) end when :IN, :"NOT IN" cols = args.at(0) vals = args.at(1) col_array = true if cols.is_a?(Array) if vals.is_a?(Array) val_array = true empty_val_array = vals == [] end if empty_val_array literal_append(sql, empty_array_value(op, cols)) elsif col_array if !supports_multiple_column_in? if val_array expr = SQL::BooleanExpression.new(:OR, *vals.to_a.map{|vs| SQL::BooleanExpression.from_value_pairs(cols.to_a.zip(vs).map{|c, v| [c, v]})}) literal_append(sql, op == :IN ? expr : ~expr) else old_vals = vals vals = vals.naked if vals.is_a?(Sequel::Dataset) vals = vals.to_a val_cols = old_vals.columns complex_expression_sql_append(sql, op, [cols, vals.map!{|x| x.values_at(*val_cols)}]) end else # If the columns and values are both arrays, use array_sql instead of # literal so that if values is an array of two element arrays, it # will be treated as a value list instead of a condition specifier. sql << PAREN_OPEN literal_append(sql, cols) sql << SPACE << op.to_s << SPACE if val_array array_sql_append(sql, vals) else literal_append(sql, vals) end sql << PAREN_CLOSE end else sql << PAREN_OPEN literal_append(sql, cols) sql << SPACE << op.to_s << SPACE literal_append(sql, vals) sql << PAREN_CLOSE end when *TWO_ARITY_OPERATORS if REGEXP_OPERATORS.include?(op) && !supports_regexp? raise InvalidOperation, "Pattern matching via regular expressions is not supported on #{db.database_type}" end sql << PAREN_OPEN literal_append(sql, args.at(0)) sql << SPACE << op.to_s << SPACE literal_append(sql, args.at(1)) sql << PAREN_CLOSE when *N_ARITY_OPERATORS sql << PAREN_OPEN c = false op_str = " #{op} " args.each do |a| sql << op_str if c literal_append(sql, a) c ||= true end sql << PAREN_CLOSE when :NOT sql << NOT_SPACE literal_append(sql, args.at(0)) when :NOOP literal_append(sql, args.at(0)) when :'B~' sql << TILDE literal_append(sql, args.at(0)) when :extract sql << EXTRACT << args.at(0).to_s << FROM literal_append(sql, args.at(1)) sql << PAREN_CLOSE else raise(InvalidOperation, "invalid operator #{op}") end end
SQL fragment for constants
# File lib/sequel/dataset/sql.rb, line 495 def constant_sql_append(sql, constant) sql << constant.to_s end
SQL fragment for delayed evaluations, evaluating the object and literalizing the returned value.
# File lib/sequel/dataset/sql.rb, line 501 def delayed_evaluation_sql_append(sql, callable) literal_append(sql, callable.call) end
SQL fragment specifying an emulated SQL function call. By default, assumes just the function name may need to be emulated; adapters should set an EMULATED_FUNCTION_MAP hash mapping emulated functions to native functions in their dataset class to set up the emulation.
# File lib/sequel/dataset/sql.rb, line 510 def emulated_function_sql_append(sql, f) _function_sql_append(sql, native_function_name(f.f), f.args) end
SQL fragment specifying a JOIN clause without ON or USING.
# File lib/sequel/dataset/sql.rb, line 520 def join_clause_sql_append(sql, jc) table = jc.table table_alias = jc.table_alias table_alias = nil if table == table_alias sql << SPACE << join_type_sql(jc.join_type) << SPACE identifier_append(sql, table) as_sql_append(sql, table_alias) if table_alias end
SQL fragment for NegativeBooleanConstants
# File lib/sequel/dataset/sql.rb, line 545 def negative_boolean_constant_sql_append(sql, constant) sql << NOT_SPACE boolean_constant_sql_append(sql, constant) end
SQL fragment for the ordered expression, used in the ORDER BY clause.
# File lib/sequel/dataset/sql.rb, line 552 def ordered_expression_sql_append(sql, oe) literal_append(sql, oe.expression) sql << (oe.descending ? DESC : ASC) case oe.nulls when :first sql << NULLS_FIRST when :last sql << NULLS_LAST end end
SQL fragment for a literal string with placeholders
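For context, placeholder literal strings usually come from filters that mix literal SQL with arguments, e.g. (output shown is approximate):

  DB[:items].where('price > ? AND name = ?', 100, 'abc').sql
  # => "SELECT * FROM items WHERE (price > 100 AND name = 'abc')"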
# File lib/sequel/dataset/sql.rb, line 564 def placeholder_literal_string_sql_append(sql, pls) args = pls.args str = pls.str sql << PAREN_OPEN if pls.parens if args.is_a?(Hash) re = /:(#{args.keys.map{|k| Regexp.escape(k.to_s)}.join('|')})\b/ loop do previous, q, str = str.partition(re) sql << previous literal_append(sql, args[($1||q[1..-1].to_s).to_sym]) unless q.empty? break if str.empty? end elsif str.is_a?(Array) len = args.length str.each_with_index do |s, i| sql << s literal_append(sql, args[i]) unless i == len end else i = -1 loop do previous, q, str = str.partition(QUESTION_MARK) sql << previous literal_append(sql, args.at(i+=1)) unless q.empty? break if str.empty? end end sql << PAREN_CLOSE if pls.parens end
SQL fragment for the qualified identifier, specifying a table and a column (or schema and table). If 3 arguments are given, the 2nd should be the table/qualifier and the third should be the column/qualified identifier. If 2 arguments are given, the 2nd should be an SQL::QualifiedIdentifier.
# File lib/sequel/dataset/sql.rb, line 598 def qualified_identifier_sql_append(sql, table, column=(c = table.column; table = table.table; c)) identifier_append(sql, table) sql << DOT identifier_append(sql, column) end
Adds quoting to identifiers (columns and tables). If identifiers are not being quoted, returns the name as a string. If identifiers are being quoted, quotes the name with quoted_identifier.
# File lib/sequel/dataset/sql.rb, line 607 def quote_identifier_append(sql, name) if name.is_a?(LiteralString) sql << name else name = name.value if name.is_a?(SQL::Identifier) name = input_identifier(name) if quote_identifiers? quoted_identifier_append(sql, name) else sql << name end end end
Separates the schema from the table and returns a string with them quoted (if quoting identifiers)
# File lib/sequel/dataset/sql.rb, line 623 def quote_schema_table_append(sql, table) schema, table = schema_and_table(table) if schema quote_identifier_append(sql, schema) sql << DOT end quote_identifier_append(sql, table) end
This method quotes the given name with the SQL standard double quote. It should be overridden by subclasses to provide quoting that does not match the SQL standard, such as the backtick (used by MySQL and SQLite).
# File lib/sequel/dataset/sql.rb, line 635 def quoted_identifier_append(sql, name) sql << QUOTE << name.to_s.gsub(QUOTE_RE, DOUBLE_QUOTE) << QUOTE end
Split the schema information from the table, returning two strings, one for the schema and one for the table. The returned schema may be nil, but the table will always have a string value.
Note that this function does not handle tables with more than one level of qualification (e.g. database.schema.table on Microsoft SQL Server).
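Illustrative calls (assuming no default schema is set on the database):

  DB[:items].schema_and_table(:s__t)  # => ['s', 't']
  DB[:items].schema_and_table(:t)     # => [nil, 't']
  DB[:items].schema_and_table(Sequel::SQL::QualifiedIdentifier.new(:s, :t))  # => ['s', 't']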
# File lib/sequel/dataset/sql.rb, line 646 def schema_and_table(table_name, sch=(db.default_schema if db)) sch = sch.to_s if sch case table_name when Symbol s, t, a = split_symbol(table_name) [s||sch, t] when SQL::QualifiedIdentifier [table_name.table.to_s, table_name.column.to_s] when SQL::Identifier [sch, table_name.value.to_s] when String [sch, table_name.to_s] else raise Error, 'table_name should be a Symbol, SQL::QualifiedIdentifier, SQL::Identifier, or String' end end
Splits table_name into an array of strings.
ds.split_qualifiers(:s) # ['s'] ds.split_qualifiers(:t__s) # ['t', 's'] ds.split_qualifiers(Sequel.qualify(:d, :t__s)) # ['d', 't', 's'] ds.split_qualifiers(Sequel.qualify(:h__d, :t__s)) # ['h', 'd', 't', 's']
# File lib/sequel/dataset/sql.rb, line 669 def split_qualifiers(table_name, *args) case table_name when SQL::QualifiedIdentifier split_qualifiers(table_name.table, nil) + split_qualifiers(table_name.column, nil) else sch, table = schema_and_table(table_name, *args) sch ? [sch, table] : [table] end end
The SQL fragment for the given window function's function and window.
# File lib/sequel/dataset/sql.rb, line 729 def window_function_sql_append(sql, function, window) literal_append(sql, function) sql << OVER literal_append(sql, window) end
The SQL fragment for the given window's options.
# File lib/sequel/dataset/sql.rb, line 688 def window_sql_append(sql, opts) raise(Error, 'This dataset does not support window functions') unless supports_window_functions? sql << PAREN_OPEN window, part, order, frame = opts.values_at(:window, :partition, :order, :frame) space = false space_s = SPACE if window literal_append(sql, window) space = true end if part sql << space_s if space sql << PARTITION_BY expression_list_append(sql, Array(part)) space = true end if order sql << space_s if space sql << ORDER_BY_NS expression_list_append(sql, Array(order)) space = true end case frame when nil # nothing when :all sql << space_s if space sql << FRAME_ALL when :rows sql << space_s if space sql << FRAME_ROWS when String sql << space_s if space sql << frame else raise Error, "invalid window frame clause, should be :all, :rows, a string, or nil" end sql << PAREN_CLOSE end
Formats an INSERT statement using the stored columns and values.
# File lib/sequel/dataset/sql.rb, line 738 def _insert_sql clause_sql(:insert) end
Formats an UPDATE statement using the stored values.
# File lib/sequel/dataset/sql.rb, line 743 def _update_sql clause_sql(:update) end
Return a #from_self dataset if an order or limit is specified, so it works as expected with UNION, EXCEPT, and INTERSECT clauses.
# File lib/sequel/dataset/sql.rb, line 749 def compound_from_self (@opts[:limit] || @opts[:order]) ? from_self : self end