
Common questions (Python)

How to make the script support both local and cloud run

Flexible authorization

When the script runs in the cloud, SeaTable provides a context object containing the auto-generated server URL and the base's API token. If you run the script locally, you need to specify these two variables manually; the API token can be generated in the base's drop-down menu under "Advanced -> API Token".

Use the following pattern to make the script support both local and cloud runs:

from seatable_api import Base, context

server_url = context.server_url or 'https://cloud.seatable.io'
api_token = context.api_token or 'c3c75dca2c369848455a39f4436147639cf02b2d'


base = Base(api_token, server_url)
base.auth()

Dependencies that need to be installed to run the script locally

To run the script locally, you need to install the seatable-api package:

pip3 install seatable-api

Additional requirements are:

  • Python >= 3.5
  • requests
  • socketIO-client-nexus

List of libraries supported in the cloud environment

In the cloud environment, Python scripts run within a Docker container. This container comes pre-configured with a set of Python libraries that can be imported and used in your scripts. If you require libraries not included in this set, please contact our support team. Otherwise, scripts using unsupported libraries can only be executed locally.

Python Standard Library

The cloud environment currently utilizes Python 3.12. This version supports all modules in the Python 3.12 standard library. Common built-in libraries such as os, sys, datetime, and others are readily available for use in your scripts.
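As a quick illustration, standard-library modules can be imported directly, without any installation step (the values below are just examples, not SeaTable-specific output):

```python
import datetime
import os

# Standard-library modules are available out of the box in the cloud runner.
today = datetime.date.today().isoformat()  # ISO date string, e.g. '2024-05-01'
cwd = os.getcwd()                          # working directory of the script
print(today, cwd)
```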

Third-Party Libraries

In addition to the standard library, we've included several popular third-party packages to enhance your scripting capabilities:

  • seatable-api: Official SeaTable Python API
  • dateutils: Extensions to Python's datetime module
  • requests: HTTP library for Python
  • pyOpenSSL: Python wrapper for OpenSSL
  • Pillow: Python Imaging Library (Fork) with support for HEIF images
  • python-barcode: Barcode generator
  • qrcode: QR Code generator
  • pandas: Data manipulation and analysis library
  • numpy: Fundamental package for scientific computing
  • openai: OpenAI API client library
  • ldap3: LDAP v3 client library
  • pydantic: Data validation and settings management using Python type annotations
  • httpx: A next-generation HTTP client for Python
  • PyJWT: JSON Web Token implementation in Python
  • python-socketio: Python implementation of the Socket.IO realtime server
  • scipy: Fundamental algorithms for scientific computing in Python
  • PyPDF: PDF toolkit for Python
  • pdfmerge: Merge PDF files
  • Markdown: Convert Markdown to HTML
  • RapidFuzz: A fast string matching library using string similarity calculations

This list is not exhaustive. For a complete, up-to-date list of available third-party packages, you can run the following Python script in your SeaTable environment:

import importlib.metadata

# List all installed packages
installed_packages = importlib.metadata.distributions()

# Print package names
for package in installed_packages:
    print(package.metadata['Name'])

Install and use custom Python libraries

  • The Python libraries in SeaTable Cloud cannot be changed.
  • If you run your own SeaTable Server, you can install additional libraries yourself.

Printing complex elements (dicts, tables, arrays of rows) is sometimes difficult to read

Consider running your code in a Python IDE, which may offer dedicated features for data visualization (remember that outside the cloud you cannot rely on context to provide api_token and server_url; see the first question for the dual-run syntax). You can also use the json library to make the output of complex objects easier to read:

import json # (1)!
from seatable_api import Base, context
base = Base(context.api_token, context.server_url)
base.auth()

print(json.dumps(base.get_metadata(), indent=' ')) # (2)!
  1. Import the json library

  2. Print json.dumps(object, indent='  ') instead of printing object directly. You have to explicitly specify an indent character that is not a classic space, because the output window of SeaTable's script editor trims regular indent spaces.

Dealing with more than 1000 rows at once with batch operations

As presented in the API Reference, batch operations such as base.batch_append_rows, base.batch_update_rows, base.batch_delete_rows or base.batch_update_links are limited to 1000 rows per call. To deal with a larger number of rows, you could:

  • Use an INSERT, UPDATE or DELETE SQL query that can operate on an unlimited number of rows

  • Use a while loop to split your operation into 1000-row chunks, for example (note that this will no longer be a single atomic operation):

from seatable_api import Base, context

base = Base(context.api_token, context.server_url)
base.auth()

# Suppose new_rows holds more than 1000 rows to append
while len(new_rows) > 0:
    end = min(1000, len(new_rows))
    rows_chunk = new_rows[:end]
    print(f"{rows_chunk[0]['Name']} > {rows_chunk[-1]['Name']}")
    base.batch_append_rows("Table1", rows_chunk)
    new_rows = new_rows[end:]

To batch update links, the loop will be slightly more complex, as you will also have to split other_rows_ids_map accordingly.
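The splitting logic above can also be factored into a small generator helper (a sketch; `chunked` is a hypothetical name, and the commented-out calls assume the authenticated `base` from the snippet above):

```python
def chunked(rows, size=1000):
    """Yield successive slices of at most `size` rows."""
    for start in range(0, len(rows), size):
        yield rows[start:start + size]

# Usage with a batch operation (requires an authenticated `base`):
# for rows_chunk in chunked(new_rows):
#     base.batch_append_rows("Table1", rows_chunk)
```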