Delete NULL rows (SQL)

Clean a table by removing rows where all column values are NULL. In a WHERE clause, a COLUMNS(*) predicate expands into an AND over every column, so COLUMNS(*) IS NULL matches only rows in which every column is NULL; EXCEPT then subtracts those rows from the full table.

Execute this SQL

WITH t AS (
  VALUES
    (NULL, NULL),
    ('john', 'smith'),
    ('mike', NULL),
    (NULL, 'jones'),
    (NULL, NULL)
)
FROM t
EXCEPT
FROM t
WHERE COLUMNS(*) IS NULL;


MAUVIERE

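The same pattern works on a persistent table. A minimal sketch, assuming a hypothetical table my_table; note that plain EXCEPT also collapses duplicate rows, so EXCEPT ALL is the safer choice when duplicates must survive:

CREATE OR REPLACE TABLE my_table_clean AS
FROM my_table
EXCEPT ALL
FROM my_table
WHERE COLUMNS(*) IS NULL;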


Macro to glimpse a table in the current DB (SQL)

An attempt to replicate the functionality of dplyr's (and Polars') glimpse with a simple macro that takes the name of a table in the current database as a quoted string. It shows each column's name, data type, and first few values. Intended for use in the CLI. Usage: FROM glimpse('table_name');

Execute this SQL

CREATE OR REPLACE MACRO glimpse(table_name) AS TABLE
WITH TableSchema AS (
    -- Get column names and types from the system catalog
    SELECT
        cid,  
        name AS column_name,
        type AS column_type
    FROM pragma_table_info(table_name)
),
SampleData AS (
    -- Select the first 5 rows from the target table
    SELECT *
    FROM query_table(table_name)
    LIMIT 5
),
SampleDataUnpivoted AS (
    -- Unpivot the sample data: columns become rows
    UNPIVOT (SELECT list(COLUMNS(*)::VARCHAR) FROM SampleData)
    ON COLUMNS(*)
    INTO
        NAME column_name
        VALUE sample_values_list -- This will be a list of strings
)
-- Final selection joining schema info with sample data
SELECT
    ts.column_name,
    ts.column_type,
    -- Convert the list to string and remove brackets for cleaner display
    regexp_replace(usp.sample_values_list::VARCHAR, '^\[|\]$', '', 'g') AS sample_data
FROM TableSchema ts
JOIN SampleDataUnpivoted usp ON ts.column_name = usp.column_name
ORDER BY ts.cid; 


Steve Crawshaw

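A quick self-contained check of the macro, using a hypothetical three-column table:

CREATE OR REPLACE TABLE trips (id INTEGER, city VARCHAR, fare DOUBLE);
INSERT INTO trips VALUES (1, 'Lyon', 12.5), (2, 'Paris', 8.0), (3, NULL, 22.1);
FROM glimpse('trips');
-- One row per column: column_name, column_type, and the sample values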


Easy GraphQL Querying (SQL)

This uses the http_client and json extensions to create a (relatively) flexible and easy-to-use GraphQL reader. It only supports POST, but it would be straightforward to change to GET. Parameters control what is extracted from the response and how the results are shaped into a DuckDB table.

Execute this SQL

INSTALL http_client FROM community; 
LOAD json; 
LOAD http_client; 
-- Run a GraphQL query and get back fields with various diagnostics and intermediate values 
CREATE OR REPLACE MACRO 
  query_graphql(endpoint, query,  -- GraphQL endpoint and query
                params:=NULL,  -- GraphQL parameters
                headers:=MAP{},  -- HTTP headers (e.g., auth)
                result_path:='$',  -- json_path to use as results
                result_structure:=NULL,  -- json_structure for results or auto detect
                strict:=true  -- Error on failure to transform results
  ) AS TABLE
  SELECT
    response: http_post(endpoint, headers, IF(params IS NULL, {'query': query}, {'query': query, 'params': params})),
    body: (response->>'body')::JSON,
    json: body->result_path,
    structure: IF(result_structure IS NULL, json_structure(json), result_structure),
    result: IF(strict, from_json_strict(json, structure), from_json(json, structure));

-- Simple helper to extract GraphQL results to a "normal" table 
CREATE OR REPLACE MACRO 
  read_graphql(endpoint, query,  -- Same as above ...
               params:=NULL, 
               headers:=MAP{}, 
               result_path:='$', 
               result_structure:=NULL, 
               strict:=true,  -- ... Same as above
               unnest_levels:=0  -- Apply unnest this many times to create table
  ) AS TABLE
  WITH nested AS (
    SELECT [result] AS result
    FROM query_graphql(endpoint, query,
                       params:=params,
                       headers:=headers,
                       result_path:=result_path,
                       result_structure:=result_structure, 
                       strict:=strict)
  ) 
  SELECT unnest(result, max_depth:=unnest_levels+1) AS result
  FROM nested;


------------------- Example usage ---------------------
FROM read_graphql('https://rickandmortyapi.com/graphql', 
  $$
  query Query($name: String) {
    characters(page: 2, filter: {name: $name}) {
      info { count }
      results { id name gender }
    }
  }
  $$,
  params:={'name': 'Morty'},    
  result_path:='$.data.characters.results',
  result_structure:=[{id:'int',name:'string',gender:'string'}],    
  unnest_levels:=2
);

-- Example Results ------------------
-- ┌─────────┬────────────────────┬─────────┐ 
-- │   id    │        name        │ gender  │ 
-- │  int32  │      varchar       │ varchar │ 
-- ├─────────┼────────────────────┼─────────┤ 
-- │ 21      │ Aqua Morty         │ Male    │ 
-- ... 
-- ├─────────┴────────────────────┴─────────┤ 
-- │ 20 rows                      3 columns │ 
-- └────────────────────────────────────────┘


Teague Sterling

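The lower-level query_graphql macro is also handy on its own for debugging, since it exposes the raw body and the auto-detected structure. A sketch against the same endpoint (the exact field selection is an assumption):

SELECT body, structure
FROM query_graphql('https://rickandmortyapi.com/graphql',
                   'query { characters { info { count } } }',
                   result_path:='$.data');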


DuckDB to GeoJSON (SQL)

This snippet produces a correctly formatted GeoJSON file from any table/query using the spatial and json extensions.

Execute this SQL

INSTALL spatial;
LOAD spatial;

COPY (
    SELECT
        *, -- all these fields will become properties in the resulting GeoJSON (does not work well with map-like columns)
        ST_GeomFromText(wkt) AS geometry -- this is the main geometry entry
    FROM temp_table
) TO 'test.geojson' WITH (FORMAT GDAL, DRIVER 'GeoJSON', LAYER_NAME 'test');


Shabbir Marzban

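For a self-contained test, a hypothetical temp_table with a wkt column can be created first:

CREATE OR REPLACE TABLE temp_table AS
SELECT * FROM (VALUES
    (1, 'Bristol', 'POINT(-2.5879 51.4545)'),
    (2, 'Lyon',    'POINT(4.8357 45.7640)')
) AS t(id, name, wkt);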


Export/import/share DuckDB UI Notebooks

The DuckDB UI stores notebook content in an internal database called _duckdb_ui. You can query and export notebook content, as well as insert new definitions into the database. Warning: Modifying the internal database may lead to corruption and data loss. Be cautious and use it at your own risk!
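To see which notebooks are available before exporting, the internal tables can be listed read-only (low risk):

select title, version, created
from _duckdb_ui.notebook_versions
where expires is null
order by created;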

Export a notebook definition to JSON (SQL)

copy (
  select
    "json"
  from _duckdb_ui.notebook_versions
  where 1=1
    and title = 'MySingleNotebook'
    and expires is null
) to 'exported-notebook.json';


Import notebook definition (SQL)

set variable notebook_content = (select json from 'exported-notebook.json');
set variable notebook_id = uuid();
set variable current_timestamp = now();

begin transaction;
  insert into _duckdb_ui.notebooks (id, name, created)
  select
    getvariable('notebook_id'),
    'notebook_' || getvariable('notebook_id'),
    getvariable('current_timestamp')
  ;

  insert into _duckdb_ui.notebook_versions (notebook_id, version, title, json, created, expires)
  select
    getvariable('notebook_id'),
    1,
    'imported-notebook-' || getvariable('current_timestamp'),
    getvariable('notebook_content'),
    getvariable('current_timestamp'),
    null
  ;
commit;
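To confirm the import, join the two internal tables back together (the title prefix matches the insert above):

select n.name, v.title, v.version
from _duckdb_ui.notebooks n
join _duckdb_ui.notebook_versions v on v.notebook_id = n.id
where v.title like 'imported-notebook-%';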



Detect Schema Changes Across Datasets (Python)

Compare the schema of two datasets and identify any differences.

Execute this Python

import duckdb

def compare_schemas(file1, file2):
    """
    Compare schemas of two datasets and find differences.
    Args:
        file1 (str): Path to the first dataset (CSV).
        file2 (str): Path to the second dataset (CSV).
    Returns:
        dict: Columns unique to each file and columns whose types differ.
    """
    con = duckdb.connect()
    # DESCRIBE returns rows of (column_name, column_type, null, key, default, extra)
    schema1 = {name: ctype for name, ctype, *_ in con.execute(
        f"DESCRIBE SELECT * FROM read_csv_auto('{file1}')").fetchall()}
    schema2 = {name: ctype for name, ctype, *_ in con.execute(
        f"DESCRIBE SELECT * FROM read_csv_auto('{file2}')").fetchall()}
    return {
        "only_in_file1": sorted(schema1.keys() - schema2.keys()),
        "only_in_file2": sorted(schema2.keys() - schema1.keys()),
        "type_mismatches": {
            name: (schema1[name], schema2[name])
            for name in schema1.keys() & schema2.keys()
            if schema1[name] != schema2[name]
        },
    }

# Example Usage
differences = compare_schemas("data1.csv", "data2.csv")
print(differences)

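The same check can be expressed in plain DuckDB SQL. A sketch using EXCEPT over the two DESCRIBE results (file names as in the usage example):

WITH s1 AS (SELECT column_name, column_type FROM (DESCRIBE SELECT * FROM read_csv_auto('data1.csv'))),
     s2 AS (SELECT column_name, column_type FROM (DESCRIBE SELECT * FROM read_csv_auto('data2.csv')))
SELECT 'only_in_file1' AS side, * FROM (FROM s1 EXCEPT FROM s2)
UNION ALL
SELECT 'only_in_file2' AS side, * FROM (FROM s2 EXCEPT FROM s1);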


Remove Duplicate Records from a CSV File (Bash)

This function helps clean up a dataset by identifying and removing duplicate records. It’s especially useful for ensuring data integrity before analysis.

Execute this Bash

#!/bin/bash
function remove_duplicates() {
    input_file="$1"  # Input CSV file with duplicates
    output_file="$2" # Deduplicated output CSV file
    # Use DuckDB to remove duplicate rows and write the cleaned data to a new CSV file.
    duckdb -c "COPY (SELECT DISTINCT * FROM read_csv_auto('$input_file')) TO '$output_file' (FORMAT CSV, HEADER TRUE);"
}

# Usage: remove_duplicates "input_data.csv" "cleaned_data.csv"
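
To preview how many rows deduplication would drop before writing the output, a quick count can be run first (same input file as the usage note; runnable via duckdb -c):

SELECT count(*)
     - (SELECT count(*) FROM (SELECT DISTINCT * FROM read_csv_auto('input_data.csv'))) AS duplicate_rows
FROM read_csv_auto('input_data.csv');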
