Bump sequel_pg from 1.17.2 to 1.18.2 #842
Open
dependabot wants to merge 1 commit into main from dependabot/bundler/sequel_pg-1.18.2
Conversation
Contributor
gem compare bigdecimal 3.2.2 4.0.1
Compared versions: ["3.2.2", "4.0.1"]
DIFFERENT require_paths:
3.2.2: ["/opt/hostedtoolcache/Ruby/3.4.8/x64/lib/ruby/gems/3.4.0/extensions/x86_64-linux/3.4.0/bigdecimal-3.2.2", "lib"]
4.0.1: ["/opt/hostedtoolcache/Ruby/3.4.8/x64/lib/ruby/gems/3.4.0/extensions/x86_64-linux/3.4.0/bigdecimal-4.0.1", "lib"]
DIFFERENT rubygems_version:
3.2.2: 3.6.7
4.0.1: 3.6.9
DIFFERENT version:
3.2.2: 3.2.2
4.0.1: 4.0.1
DIFFERENT files:
3.2.2->4.0.1:
* Changed:
ext/bigdecimal/bigdecimal.c +727/-2282
ext/bigdecimal/bigdecimal.h +4/-25
ext/bigdecimal/bits.h +3/-0
ext/bigdecimal/extconf.rb +3/-7
ext/bigdecimal/missing.h +1/-93
lib/bigdecimal.rb +355/-0
lib/bigdecimal/jacobian.rb +2/-0
lib/bigdecimal/ludcmp.rb +2/-0
lib/bigdecimal/math.rb +788/-71
lib/bigdecimal/newton.rb +2/-0
lib/bigdecimal/util.rb +15/-14
Contributor
gem compare sequel_pg 1.17.2 1.18.2
Compared versions: ["1.17.2", "1.18.2"]
DIFFERENT date:
1.17.2: 2025-03-14 00:00:00 UTC
1.18.2: 1980-01-02 00:00:00 UTC
DIFFERENT require_paths:
1.17.2: ["/opt/hostedtoolcache/Ruby/3.4.8/x64/lib/ruby/gems/3.4.0/extensions/x86_64-linux/3.4.0/sequel_pg-1.17.2", "lib"]
1.18.2: ["/opt/hostedtoolcache/Ruby/3.4.8/x64/lib/ruby/gems/3.4.0/extensions/x86_64-linux/3.4.0/sequel_pg-1.18.2", "lib"]
DIFFERENT rubygems_version:
1.17.2: 3.6.2
1.18.2: 3.6.9
DIFFERENT version:
1.17.2: 1.17.2
1.18.2: 1.18.2
DIFFERENT files:
1.17.2->1.18.2:
* Added:
lib/sequel_pg/model.rb +48/-0
* Changed:
CHANGELOG +26/-0
README.rdoc +14/-1
ext/sequel_pg/extconf.rb +2/-0
ext/sequel_pg/sequel_pg.c +224/-25
lib/sequel_pg/sequel_pg.rb +96/-29
DIFFERENT extra_rdoc_files:
1.17.2->1.18.2:
* Changed:
CHANGELOG +26/-0
README.rdoc +14/-1
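The summaries above and the per-file diff in the next comment come from the gem-compare RubyGems plugin. A minimal sketch for reproducing them locally, assuming the plugin is installed (e.g. via `gem install gem-compare`):

```ruby
# Sketch only: assumes the gem-compare plugin is installed so that
# `gem compare` is available as a RubyGems subcommand.
versions = %w[1.17.2 1.18.2]

# Metadata and per-file change summary, as posted above.
system('gem', 'compare', 'sequel_pg', *versions)

# Full file-by-file diff between the two released gems, as posted below.
system('gem', 'compare', '--diff', 'sequel_pg', *versions)
```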
Contributor
gem compare --diff sequel_pg 1.17.2 1.18.2
Compared versions: ["1.17.2", "1.18.2"]
DIFFERENT files:
1.17.2->1.18.2:
* Added:
lib/sequel_pg/model.rb
--- /tmp/20251219-664-ousg11 2025-12-19 03:02:59.554890709 +0000
+++ /tmp/d20251219-664-8vh6sz/sequel_pg-1.18.2/lib/sequel_pg/model.rb 2025-12-19 03:02:59.553890696 +0000
@@ -0,0 +1,48 @@
+class Sequel::Postgres::Dataset
+ # If model loads are being optimized and this is a model load, use the optimized
+ # version.
+ def each(&block)
+ rp = row_proc
+ return super unless allow_sequel_pg_optimization? && optimize_model_load?(rp)
+ clone(:_sequel_pg_type=>:model, :_sequel_pg_value=>rp).fetch_rows(sql, &block)
+ end
+
+ # Avoid duplicate method warning
+ alias with_sql_all with_sql_all
+
+ # Always use optimized version
+ def with_sql_all(sql, &block)
+ rp = row_proc
+ return super unless allow_sequel_pg_optimization?
+
+ if optimize_model_load?(rp)
+ clone(:_sequel_pg_type=>:all_model, :_sequel_pg_value=>row_proc).fetch_rows(sql) do |array|
+ post_load(array)
+ array.each(&block) if block
+ return array
+ end
+ []
+ else
+ clone(:_sequel_pg_type=>:all).fetch_rows(sql) do |array|
+ if rp = row_proc
+ array.map!{|h| rp.call(h)}
+ end
+ post_load(array)
+ array.each(&block) if block
+ return array
+ end
+ []
+ end
+ end
+
+ private
+
+ # The model load can only be optimized if it's for a model and it's not a graphed dataset
+ # or using a cursor.
+ def optimize_model_load?(rp)
+ rp.is_a?(Class) &&
+ rp < Sequel::Model &&
+ rp.method(:call).owner == Sequel::Model::ClassMethods &&
+ opts[:optimize_model_load] != false
+ end
+end
* Changed:
CHANGELOG
--- /tmp/d20251219-664-8vh6sz/sequel_pg-1.17.2/CHANGELOG 2025-12-19 03:02:59.549890641 +0000
+++ /tmp/d20251219-664-8vh6sz/sequel_pg-1.18.2/CHANGELOG 2025-12-19 03:02:59.551890668 +0000
@@ -0,0 +1,26 @@
+=== 1.18.2 (2025-12-18)
+
+* Avoid bogus warnings about the implicit block parameter in Ruby 3.3 (jeremyevans) (#64)
+
+=== 1.18.1 (2025-12-16)
+
+* Fix truncated results for map/select_map/select_order_map/as_hash/to_hash_groups/select_hash/select_hash_groups/as_set/select_set for datasets using use_cursor (jeremyevans) (#62)
+
+* Avoid compilation warnings checking for HAVE_* definitions (jeremyevans)
+
+=== 1.18.0 (2025-12-01)
+
+* Optimize Dataset#all and #with_sql_all (jeremyevans)
+
+* Fix runtime warnings when using Dataset#as_hash and #to_hash_groups with invalid columns (jeremyevans)
+
+* Fix Dataset#map return value when the null_dataset extension is used (jeremyevans)
+
+* Further optimize Dataset#as_set and #select_set on Ruby 4+ using core Set C-API (jeremyevans)
+
+* Use rb_hash_new_capa if available to avoid unnecessary hash resizing (jeremyevans)
+
+* Further optimize Dataset#map and #select_map by populating array in C instead of yielding to Ruby (jeremyevans)
+
+* Optimize Dataset#as_set and #select_set in Sequel 5.99+ (jeremyevans)
+
README.rdoc
--- /tmp/d20251219-664-8vh6sz/sequel_pg-1.17.2/README.rdoc 2025-12-19 03:02:59.549890641 +0000
+++ /tmp/d20251219-664-8vh6sz/sequel_pg-1.18.2/README.rdoc 2025-12-19 03:02:59.552890682 +0000
@@ -129,0 +130 @@
+
@@ -131,0 +133 @@
+
@@ -136 +138,12 @@
- gem 'sequel_pg', :require=>'sequel'
+ gem 'sequel_pg', require: 'sequel'
+
+* Using a precompiled pg gem can cause issues in certain cases,
+ since it statically links a libpq that could differ from the system
+ libpq dynamically linked to the sequel_pg gem. You can work around
+ the issue by forcing the ruby platform for the pg gem:
+
+ # Manual install
+ gem install pg --platform ruby
+
+ # Gemfile
+ gem 'pg', force_ruby_platform: true
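Taken together, the updated README's two suggestions would land in a Gemfile roughly as in the sketch below; the `source` line and version handling are assumed, not part of the diff:

```ruby
# Gemfile sketch combining both README recommendations above.
source 'https://rubygems.org' # assumed; not part of the diff

# Force the ruby platform so pg compiles against the system libpq,
# the same libpq that sequel_pg links against.
gem 'pg', force_ruby_platform: true

# Load sequel_pg together with Sequel itself.
gem 'sequel_pg', require: 'sequel'
```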
ext/sequel_pg/extconf.rb
--- /tmp/d20251219-664-8vh6sz/sequel_pg-1.17.2/ext/sequel_pg/extconf.rb 2025-12-19 03:02:59.550890655 +0000
+++ /tmp/d20251219-664-8vh6sz/sequel_pg-1.18.2/ext/sequel_pg/extconf.rb 2025-12-19 03:02:59.552890682 +0000
@@ -9,0 +10,2 @@
+ have_func 'rb_hash_new_capa'
+ have_func 'rb_set_new_capa'
ext/sequel_pg/sequel_pg.c
--- /tmp/d20251219-664-8vh6sz/sequel_pg-1.17.2/ext/sequel_pg/sequel_pg.c 2025-12-19 03:02:59.550890655 +0000
+++ /tmp/d20251219-664-8vh6sz/sequel_pg-1.18.2/ext/sequel_pg/sequel_pg.c 2025-12-19 03:02:59.553890696 +0000
@@ -1 +1 @@
-#define SEQUEL_PG_VERSION_INTEGER 11702
+#define SEQUEL_PG_VERSION_INTEGER 11802
@@ -26,0 +27,4 @@
+#ifndef HAVE_RB_HASH_NEW_CAPA
+#define rb_hash_new_capa(_) rb_hash_new()
+#endif
+
@@ -67,0 +72,10 @@
+#define SPG_YIELD_COLUMN_ARRAY 14
+#define SPG_YIELD_COLUMNS_ARRAY 15
+#define SPG_YIELD_FIRST_ARRAY 16
+#define SPG_YIELD_ARRAY_ARRAY 17
+#define SPG_YIELD_COLUMN_SET 18
+#define SPG_YIELD_COLUMNS_SET 19
+#define SPG_YIELD_FIRST_SET 20
+#define SPG_YIELD_ARRAY_SET 21
+#define SPG_YIELD_ALL 22
+#define SPG_YIELD_ALL_MODEL 23
@@ -93,0 +108,2 @@
+static VALUE spg_sym_map_array;
+static VALUE spg_sym_map_set;
@@ -94,0 +111,2 @@
+static VALUE spg_sym_first_array;
+static VALUE spg_sym_first_set;
@@ -95,0 +114,2 @@
+static VALUE spg_sym_array_array;
+static VALUE spg_sym_array_set;
@@ -98,0 +119,2 @@
+static VALUE spg_sym_all;
+static VALUE spg_sym_all_model;
@@ -174 +196 @@
-#if HAVE_PQSETSINGLEROWMODE
+#ifdef HAVE_PQSETSINGLEROWMODE
@@ -1407,0 +1430,14 @@
+ } else if (pg_type == spg_sym_map_array) {
+ if (SYMBOL_P(pg_value)) {
+ type = SPG_YIELD_COLUMN_ARRAY;
+ } else if (rb_type(pg_value) == T_ARRAY) {
+ type = SPG_YIELD_COLUMNS_ARRAY;
+ }
+#ifdef HAVE_RB_SET_NEW_CAPA
+ } else if (pg_type == spg_sym_map_set) {
+ if (SYMBOL_P(pg_value)) {
+ type = SPG_YIELD_COLUMN_SET;
+ } else if (rb_type(pg_value) == T_ARRAY) {
+ type = SPG_YIELD_COLUMNS_SET;
+ }
+#endif
@@ -1411,0 +1448,10 @@
+ } else if (pg_type == spg_sym_first_array) {
+ type = SPG_YIELD_FIRST_ARRAY;
+ } else if (pg_type == spg_sym_array_array) {
+ type = SPG_YIELD_ARRAY_ARRAY;
+#ifdef HAVE_RB_SET_NEW_CAPA
+ } else if (pg_type == spg_sym_first_set) {
+ type = SPG_YIELD_FIRST_SET;
+ } else if (pg_type == spg_sym_array_set) {
+ type = SPG_YIELD_ARRAY_SET;
+#endif
@@ -1430,0 +1477,4 @@
+ } else if (pg_type == spg_sym_all_model && rb_type(pg_value) == T_CLASS) {
+ type = SPG_YIELD_ALL_MODEL;
+ } else if (pg_type == spg_sym_all) {
+ type = SPG_YIELD_ALL;
@@ -1439 +1489 @@
- h = rb_hash_new();
+ h = rb_hash_new_capa(nfields);
@@ -1465,0 +1516,57 @@
+ case SPG_YIELD_COLUMN_ARRAY:
+ /* Array containing single column */
+ {
+ VALUE ary = rb_ary_new2(ntuples);
+ j = spg__field_id(pg_value, colsyms, nfields);
+ if (j == -1) {
+ for(i=0; i<ntuples; i++) {
+ rb_ary_store(ary, i, Qnil);
+ }
+ }
+ else {
+ for(i=0; i<ntuples; i++) {
+ rb_ary_store(ary, i, spg__col_value(self, res, i, j, colconvert, enc_index));
+ }
+ }
+ rb_yield(ary);
+ }
+ break;
+ case SPG_YIELD_COLUMNS_ARRAY:
+ /* Array containing arrays of columns */
+ {
+ VALUE ary = rb_ary_new2(ntuples);
+ h = spg__field_ids(pg_value, colsyms, nfields);
+ for(i=0; i<ntuples; i++) {
+ rb_ary_store(ary, i, spg__col_values(self, h, colsyms, nfields, res, i, colconvert, enc_index));
+ }
+ rb_yield(ary);
+ }
+ break;
+#ifdef HAVE_RB_SET_NEW_CAPA
+ case SPG_YIELD_COLUMN_SET:
+ /* Set containing single column */
+ {
+ VALUE set = rb_set_new_capa(ntuples);
+ j = spg__field_id(pg_value, colsyms, nfields);
+ if (j == -1) {
+ rb_set_add(set, Qnil);
+ } else {
+ for(i=0; i<ntuples; i++) {
+ rb_set_add(set, spg__col_value(self, res, i, j, colconvert, enc_index));
+ }
+ }
+ rb_yield(set);
+ }
+ break;
+ case SPG_YIELD_COLUMNS_SET:
+ /* Set containing arrays of columns */
+ {
+ VALUE set = rb_set_new_capa(ntuples);
+ h = spg__field_ids(pg_value, colsyms, nfields);
+ for(i=0; i<ntuples; i++) {
+ rb_set_add(set, spg__col_values(self, h, colsyms, nfields, res, i, colconvert, enc_index));
+ }
+ rb_yield(set);
+ }
+ break;
+#endif
@@ -1481,0 +1589,46 @@
+ case SPG_YIELD_FIRST_ARRAY:
+ /* Array of first column */
+ h = rb_ary_new2(ntuples);
+ for(i=0; i<ntuples; i++) {
+ rb_ary_store(h, i, spg__col_value(self, res, i, 0, colconvert, enc_index));
+ }
+ rb_yield(h);
+ break;
+ case SPG_YIELD_ARRAY_ARRAY:
+ /* Array of arrays of all columns */
+ {
+ VALUE ary = rb_ary_new2(ntuples);
+ for(i=0; i<ntuples; i++) {
+ h = rb_ary_new2(nfields);
+ for(j=0; j<nfields; j++) {
+ rb_ary_store(h, j, spg__col_value(self, res, i, j, colconvert, enc_index));
+ }
+ rb_ary_store(ary, i, h);
+ }
+ rb_yield(ary);
+ }
+ break;
+#ifdef HAVE_RB_SET_NEW_CAPA
+ case SPG_YIELD_FIRST_SET:
+ /* Array of first column */
+ h = rb_set_new_capa(ntuples);
+ for(i=0; i<ntuples; i++) {
+ rb_set_add(h, spg__col_value(self, res, i, 0, colconvert, enc_index));
+ }
+ rb_yield(h);
+ break;
+ case SPG_YIELD_ARRAY_SET:
+ /* Array of arrays of all columns */
+ {
+ VALUE set = rb_set_new_capa(ntuples);
+ for(i=0; i<ntuples; i++) {
+ h = rb_ary_new2(nfields);
+ for(j=0; j<nfields; j++) {
+ rb_ary_store(h, j, spg__col_value(self, res, i, j, colconvert, enc_index));
+ }
+ rb_set_add(set, h);
+ }
+ rb_yield(set);
+ }
+ break;
+#endif
@@ -1487 +1640 @@
- h = rb_hash_new();
+ VALUE kv, vv;
@@ -1490,0 +1644 @@
+ h = rb_hash_new_capa(ntuples);
@@ -1492 +1646,3 @@
- rb_hash_aset(h, spg__col_value(self, res, i, k, colconvert, enc_index), spg__col_value(self, res, i, v, colconvert, enc_index));
+ kv = k == -1 ? Qnil : spg__col_value(self, res, i, k, colconvert, enc_index);
+ vv = v == -1 ? Qnil : spg__col_value(self, res, i, v, colconvert, enc_index);
+ rb_hash_aset(h, kv, vv);
@@ -1495 +1651,2 @@
- VALUE kv, vv, a;
+ VALUE a;
+ h = rb_hash_new();
@@ -1497,2 +1654,2 @@
- kv = spg__col_value(self, res, i, k, colconvert, enc_index);
- vv = spg__col_value(self, res, i, v, colconvert, enc_index);
+ kv = k == -1 ? Qnil : spg__col_value(self, res, i, k, colconvert, enc_index);
+ vv = v == -1 ? Qnil : spg__col_value(self, res, i, v, colconvert, enc_index);
@@ -1514 +1671 @@
- VALUE k;
+ VALUE k, vv;
@@ -1516 +1672,0 @@
- h = rb_hash_new();
@@ -1519,0 +1676 @@
+ h = rb_hash_new_capa(ntuples);
@@ -1521 +1678,2 @@
- rb_hash_aset(h, spg__col_values(self, k, colsyms, nfields, res, i, colconvert, enc_index), spg__col_value(self, res, i, v, colconvert, enc_index));
+ vv = v == -1 ? Qnil : spg__col_value(self, res, i, v, colconvert, enc_index);
+ rb_hash_aset(h, spg__col_values(self, k, colsyms, nfields, res, i, colconvert, enc_index), vv);
@@ -1524 +1682,2 @@
- VALUE kv, vv, a;
+ VALUE kv, a;
+ h = rb_hash_new();
@@ -1527 +1686 @@
- vv = spg__col_value(self, res, i, v, colconvert, enc_index);
+ vv = v == -1 ? Qnil : spg__col_value(self, res, i, v, colconvert, enc_index);
@@ -1543 +1702 @@
- VALUE v;
+ VALUE v, kv;
@@ -1545 +1703,0 @@
- h = rb_hash_new();
@@ -1548,0 +1707 @@
+ h = rb_hash_new_capa(ntuples);
@@ -1550 +1709,2 @@
- rb_hash_aset(h, spg__col_value(self, res, i, k, colconvert, enc_index), spg__col_values(self, v, colsyms, nfields, res, i, colconvert, enc_index));
+ kv = k == -1 ? Qnil : spg__col_value(self, res, i, k, colconvert, enc_index);
+ rb_hash_aset(h, kv, spg__col_values(self, v, colsyms, nfields, res, i, colconvert, enc_index));
@@ -1553 +1713,2 @@
- VALUE kv, vv, a;
+ VALUE vv, a;
+ h = rb_hash_new();
@@ -1555 +1716 @@
- kv = spg__col_value(self, res, i, k, colconvert, enc_index);
+ kv = k == -1 ? Qnil : spg__col_value(self, res, i, k, colconvert, enc_index);
@@ -1573 +1733,0 @@
- h = rb_hash_new();
@@ -1576,0 +1737 @@
+ h = rb_hash_new_capa(ntuples);
@@ -1581,0 +1743 @@
+ h = rb_hash_new();
@@ -1599 +1761 @@
- h = rb_hash_new();
+ h = rb_hash_new_capa(nfields);
@@ -1608,0 +1771,29 @@
+ case SPG_YIELD_ALL_MODEL:
+ {
+ VALUE ary = rb_ary_new2(ntuples);
+ VALUE obj;
+ for(i=0; i<ntuples; i++) {
+ h = rb_hash_new_capa(nfields);
+ for(j=0; j<nfields; j++) {
+ rb_hash_aset(h, colsyms[j], spg__col_value(self, res, i, j, colconvert, enc_index));
+ }
+ obj = rb_obj_alloc(pg_value);
+ rb_ivar_set(obj, spg_id_values, h);
+ rb_ary_store(ary, i, obj);
+ }
+ rb_yield(ary);
+ }
+ break;
+ case SPG_YIELD_ALL:
+ {
+ VALUE ary = rb_ary_new2(ntuples);
+ for(i=0; i<ntuples; i++) {
+ h = rb_hash_new_capa(nfields);
+ for(j=0; j<nfields; j++) {
+ rb_hash_aset(h, colsyms[j], spg__col_value(self, res, i, j, colconvert, enc_index));
+ }
+ rb_ary_store(ary, i, h);
+ }
+ rb_yield(ary);
+ }
+ break;
@@ -1649 +1840 @@
-#if HAVE_PQSETSINGLEROWMODE
+#ifdef HAVE_PQSETSINGLEROWMODE
@@ -1656 +1847 @@
-#if HAVE_PQSETSINGLEROWMODE
+#ifdef HAVE_PQSETSINGLEROWMODE
@@ -1677 +1868 @@
- VALUE h = rb_hash_new();
+ VALUE h = rb_hash_new_capa(nfields);
@@ -1738 +1929 @@
- h = rb_hash_new();
+ h = rb_hash_new_capa(nfields);
@@ -1920,0 +2112,2 @@
+ spg_sym_map_array = ID2SYM(rb_intern("map_array"));
+ spg_sym_map_set = ID2SYM(rb_intern("map_set"));
@@ -1922,0 +2116,4 @@
+ spg_sym_first_array = ID2SYM(rb_intern("first_array"));
+ spg_sym_array_array = ID2SYM(rb_intern("array_array"));
+ spg_sym_first_set = ID2SYM(rb_intern("first_set"));
+ spg_sym_array_set = ID2SYM(rb_intern("array_set"));
@@ -1925,0 +2123,2 @@
+ spg_sym_all = ID2SYM(rb_intern("all"));
+ spg_sym_all_model = ID2SYM(rb_intern("all_model"));
@@ -1997 +2196 @@
-#if HAVE_PQSETSINGLEROWMODE
+#ifdef HAVE_PQSETSINGLEROWMODE
lib/sequel_pg/sequel_pg.rb
--- /tmp/d20251219-664-8vh6sz/sequel_pg-1.17.2/lib/sequel_pg/sequel_pg.rb 2025-12-19 03:02:59.550890655 +0000
+++ /tmp/d20251219-664-8vh6sz/sequel_pg-1.18.2/lib/sequel_pg/sequel_pg.rb 2025-12-19 03:02:59.554890709 +0000
@@ -23,0 +24,21 @@
+ # :nocov:
+ if method_defined?(:as_set)
+ # :nocov:
+ if RUBY_VERSION > '4'
+ def as_set(column)
+ return super unless allow_sequel_pg_optimization?
+ clone(:_sequel_pg_type=>:map_set, :_sequel_pg_value=>column).fetch_rows(sql){|s| return s}
+ Set.new
+ end
+ # :nocov:
+ else
+ def as_set(column)
+ return super unless allow_sequel_pg_optimization?
+ rows = Set.new
+ clone(:_sequel_pg_type=>:map, :_sequel_pg_value=>column).fetch_rows(sql){|s| rows.add(s)}
+ rows
+ end
+ end
+ # :nocov:
+ end
+
@@ -30,3 +51,3 @@
- rows = []
- clone(:_sequel_pg_type=>:map, :_sequel_pg_value=>sym).fetch_rows(sql){|s| rows << s}
- rows
+ return super unless allow_sequel_pg_optimization?
+ clone(:_sequel_pg_type=>:map_array, :_sequel_pg_value=>sym).fetch_rows(sql){|a| return a}
+ []
@@ -46,0 +68 @@
+ return super unless allow_sequel_pg_optimization?
@@ -66,0 +89 @@
+ return super unless allow_sequel_pg_optimization?
@@ -76,8 +99,14 @@
- if defined?(Sequel::Model::ClassMethods)
- # If model loads are being optimized and this is a model load, use the optimized
- # version.
- def each(&block)
- if optimize_model_load?
- clone(:_sequel_pg_type=>:model, :_sequel_pg_value=>row_proc).fetch_rows(sql, &block)
- else
- super
+ # Delegate to with_sql_all using the default SQL
+ def all(&block)
+ with_sql_all(sql, &block)
+ end
+
+ # :nocov:
+ # Generally overridden by the model support, only used if the model
+ # support is not used.
+ def with_sql_all(sql, &block)
+ return super unless allow_sequel_pg_optimization?
+
+ clone(:_sequel_pg_type=>:all).fetch_rows(sql) do |array|
+ if rp = row_proc
+ array.map!{|h| rp.call(h)}
@@ -84,0 +114,3 @@
+ post_load(array)
+ array.each(&block) if block
+ return array
@@ -85,0 +118 @@
+ []
@@ -86,0 +120 @@
+ # :nocov:
@@ -92,3 +126,3 @@
- rows = []
- clone(:_sequel_pg_type=>:array).fetch_rows(sql){|s| rows << s}
- rows
+ return super unless allow_sequel_pg_optimization?
+ clone(:_sequel_pg_type=>:array_array).fetch_rows(sql){|a| return a}
+ []
@@ -99,3 +133,3 @@
- rows = []
- clone(:_sequel_pg_type=>:first).fetch_rows(sql){|s| rows << s}
- rows
+ return super unless allow_sequel_pg_optimization?
+ clone(:_sequel_pg_type=>:first_array).fetch_rows(sql){|a| return a}
+ []
@@ -104 +138,10 @@
- private
+ # :nocov:
+ if method_defined?(:_select_set_multiple)
+ # :nocov:
+ if RUBY_VERSION > '4'
+ # Always use optimized version
+ def _select_set_multiple(ret_cols)
+ return super unless allow_sequel_pg_optimization?
+ clone(:_sequel_pg_type=>:array_set).fetch_rows(sql){|s| return s}
+ Set.new
+ end
@@ -106,11 +149,23 @@
- if defined?(Sequel::Model::ClassMethods)
- # The model load can only be optimized if it's for a model and it's not a graphed dataset
- # or using a cursor.
- def optimize_model_load?
- (rp = row_proc) &&
- rp.is_a?(Class) &&
- rp < Sequel::Model &&
- rp.method(:call).owner == Sequel::Model::ClassMethods &&
- opts[:optimize_model_load] != false &&
- !opts[:use_cursor] &&
- !opts[:graph]
+ # Always use optimized version
+ def _select_set_single
+ return super unless allow_sequel_pg_optimization?
+ clone(:_sequel_pg_type=>:first_set).fetch_rows(sql){|s| return s}
+ Set.new
+ end
+ # :nocov:
+ else
+ # Always use optimized version
+ def _select_set_multiple(ret_cols)
+ return super unless allow_sequel_pg_optimization?
+ set = Set.new
+ clone(:_sequel_pg_type=>:array).fetch_rows(sql){|s| set.add s}
+ set
+ end
+
+ # Always use optimized version
+ def _select_set_single
+ return super unless allow_sequel_pg_optimization?
+ set = Set.new
+ clone(:_sequel_pg_type=>:first).fetch_rows(sql){|s| set.add s}
+ set
+ end
@@ -117,0 +173,12 @@
+ # :nocov:
+ end
+
+ if defined?(Sequel::Model::ClassMethods)
+ require_relative 'model'
+ end
+
+ private
+
+ # Whether to allow sequel_pg to optimize the each/all/with_sql_all call.
+ def allow_sequel_pg_optimization?
+ (!opts[:graph] || opts[:eager_graph]) && !opts[:cursor]
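To put the Ruby changes above in context: 1.18.x routes several whole-result dataset calls through the C extension unless the dataset is graphed or cursor-backed. A sketch of the affected call sites, assuming a postgres-adapter connection `DB` and an `items` table (both assumptions, not taken from the PR):

```ruby
require 'sequel' # with sequel_pg installed, the C extension is loaded for the postgres adapter

DB    = Sequel.connect('postgres:///example') # assumed connection string
items = DB[:items]                            # assumed table

# Optimized in 1.18.0: whole-result loads are built in C instead of row by row.
rows   = items.all
custom = items.with_sql_all('SELECT * FROM items WHERE id < 10')

# Existing optimized paths touched by this release (see the CHANGELOG hunk above).
names   = items.select_map(:name)
by_id   = items.as_hash(:id, :name)
grouped = items.to_hash_groups(:category_id, :id)

# Graphed or cursor-backed datasets skip the optimization, matching
# allow_sequel_pg_optimization? above, and fall back to Sequel's own code.
items.graph(:orders, item_id: :id).all
items.use_cursor.each { |row| row }

# Per-dataset opt-out for the optimized model load (see lib/sequel_pg/model.rb above).
Item = Class.new(Sequel::Model(:items)) # assumed model
Item.dataset.clone(optimize_model_load: false).all
```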
dependabot force-pushed from 785ffd0 to 4eeb7f6
Bumps [sequel_pg](https://github.com/jeremyevans/sequel_pg) from 1.17.2 to 1.18.2.
- [Changelog](https://github.com/jeremyevans/sequel_pg/blob/master/CHANGELOG)
- [Commits](jeremyevans/sequel_pg@1.17.2...1.18.2)

---
updated-dependencies:
- dependency-name: sequel_pg
  dependency-version: 1.18.2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
dependabot force-pushed from 4eeb7f6 to 34cdf27
Bumps sequel_pg from 1.17.2 to 1.18.2.
Changelog
Sourced from sequel_pg's changelog.
Commits
- de99106 Bump version to 1.18.2
- 72bf00a Avoid bogus warnings about the implicit block parameter in Ruby 3.3 (Fixes #64)
- 556edaa Bump version to 1.18.1
- 82f812e Fix truncated results for map/select_map/select_order_map/as_hash/to_hash_gro...
- 998030d Avoid compilation warnings checking for HAVE_* definitions
- 9b80d37 Bump version to 1.18.0
- 84388b6 Move Sequel::Model related optimization code to sequel_pg/model
- 2407329 Optimize Dataset#all and #with_sql_all
- 96fc41e Fix runtime warnings when using Dataset#as_hash and #to_hash_groups with inva...
- fe1dc78 Fix Dataset#map return value when the null_dataset extension is used

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting
@dependabot rebase.

Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- @dependabot rebase will rebase this PR
- @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
- @dependabot merge will merge this PR after your CI passes on it
- @dependabot squash and merge will squash and merge this PR after your CI passes on it
- @dependabot cancel merge will cancel a previously requested merge and block automerging
- @dependabot reopen will reopen this PR if it is closed
- @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
- @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)