Hi there,
Happy first day of spring! We liked the sound of it so much that we renamed this release from the "End of Winter" Release to the "Early Spring" Release. Same great features, better branding. (The daffodils on Broadway agree. 🌷)
This update delivers powerful new capabilities across the Connector Copilot and Workflow Builder. You can now connect to APIs that require multiple sequential requests and add datasets to workflows with a richer, more visual experience. We've also published a new Quick Start Guide to help you get up and running with the Connector Copilot in minutes.
🔗 Multi-Request API Support in Connector Copilot
The Connector Copilot now supports APIs that require multiple sequential requests to retrieve a complete dataset, with no custom code or manual orchestration required.
Many real-world enterprise APIs are asynchronous by design. You submit a request, receive a job ID or polling URL, and then check back until the results are ready. This pattern is especially common in data and analytics platforms such as Snowflake's SQL and Snowpark APIs, Google BigQuery jobs, dbt Cloud job runs, and BI services like Power BI and Tableau, which expose long-running refresh operations via REST. These workloads power downstream reporting, planning, and analytics, but until now, the async orchestration has typically required custom glue outside the platform. With this release, Connector Copilot can automatically drive the entire sequence.
How It Works
When you create a dataset from an API that uses async or multi-step patterns, the Connector Copilot will:
- Detect dependencies between API requests from the OpenAPI specification
- Chain requests in the correct order, passing identifiers (such as job IDs or export links) between steps
- Poll status endpoints automatically until results are available
- Flatten the final response into a structured, analytics-ready dataset
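At its core, the submit-poll-retrieve sequence the Copilot automates is a simple loop. The sketch below illustrates the pattern with a hypothetical in-memory stub standing in for a real async job API; the class and method names are assumptions for illustration, not the platform's actual interfaces:

```python
import time

# Hypothetical stub for an async job API: submit returns a job ID,
# and the status endpoint reports "done" after a few polls.
class FakeJobApi:
    def __init__(self, polls_until_done=3):
        self._remaining = polls_until_done

    def submit_query(self, sql):
        return {"job_id": "job-123"}

    def get_status(self, job_id):
        self._remaining -= 1
        return {"state": "done" if self._remaining <= 0 else "running"}

    def get_results(self, job_id):
        return [{"region": "EMEA", "revenue": 1200},
                {"region": "APAC", "revenue": 950}]

def run_async_job(api, sql, poll_interval=0.01, timeout=5.0):
    """Submit a job, poll its status endpoint, then fetch the results."""
    job_id = api.submit_query(sql)["job_id"]
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if api.get_status(job_id)["state"] == "done":
            return api.get_results(job_id)
        time.sleep(poll_interval)
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")

rows = run_async_job(FakeJobApi(), "SELECT region, revenue FROM sales")
```

The Copilot derives the equivalent of this loop from the OpenAPI specification, so the identifier passing and polling cadence shown here never need to be hand-written.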
Supported Patterns
This enhancement covers a range of common API workflows:
- Async job exports (new): Submit a report or query request, receive a job ID, poll for completion, then retrieve the results
- Paginated sequences: Follow next links or cursor tokens across multiple pages to assemble a full dataset
- Dependent lookups: Use the output of one endpoint as input to another (e.g., list models, then fetch run results per model)
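As a sketch, the paginated-sequence pattern boils down to a loop that keeps requesting pages until the API stops returning a cursor. The page shape below is illustrative, not any specific API's schema:

```python
def fetch_all_pages(fetch_page):
    """Follow cursor tokens across pages until no next cursor is returned."""
    items, cursor = [], None
    while True:
        page = fetch_page(cursor)
        items.extend(page["items"])
        cursor = page.get("next_cursor")
        if cursor is None:
            return items

# Hypothetical paginated endpoint: three pages of two items each.
PAGES = {
    None: {"items": [1, 2], "next_cursor": "c1"},
    "c1": {"items": [3, 4], "next_cursor": "c2"},
    "c2": {"items": [5, 6]},
}
all_items = fetch_all_pages(lambda cursor: PAGES[cursor])
# all_items is [1, 2, 3, 4, 5, 6]
```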
🧪 Quick Start Guide: Fetch FX Rates with Connector Copilot
Want to see the Connector Copilot in action? We've published a new Quick Start Guide that walks you through creating a dataset from a live foreign exchange rates API and using it in a Pipeline build.
You'll use the Exchange Rate API OpenAPI specification to:
- Connect to the API using the Connector Copilot
- Describe the dataset you want using natural language
- Build a Pipeline that transforms the FX rate data for downstream use
It's the fastest way to experience the full Connector Copilot workflow, from API spec to analytics-ready data, in just a few minutes.
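To give a flavor of the transformation step, here is a minimal sketch that reshapes a typical FX-rates payload into analytics-ready rows. The field names (`base`, `rates`, `date`) are common conventions among exchange-rate APIs, assumed here for illustration rather than taken from the guide's actual schema:

```python
# Hypothetical FX payload shaped like a typical exchange-rate API response.
payload = {
    "base": "USD",
    "date": "2025-03-20",
    "rates": {"EUR": 0.92, "GBP": 0.79, "JPY": 149.1},
}

def to_rows(payload):
    """Transform the nested rates object into one row per currency pair."""
    return [
        {"base": payload["base"], "quote": quote, "rate": rate, "date": payload["date"]}
        for quote, rate in sorted(payload["rates"].items())
    ]

rows = to_rows(payload)
# e.g. first row: {"base": "USD", "quote": "EUR", "rate": 0.92, "date": "2025-03-20"}
```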
📘 View the Quick Start Guide →
🗂️ Improved Dataset Selection in Workflows
Adding datasets to your workflows is now a richer, more interactive experience. The previous text-only search has been replaced with a visual dataset picker that gives you more insight into the data you're selecting before you commit.
What's New
When you click "Add" on the Data tab of a Workflow, you'll now see:
- A visual modal with dataset previews and metadata
- Clearer identification of dataset sources and types
- A more intuitive search and browse experience
This makes it easier to find and select the right datasets, especially when working with large libraries of connected data.
📘 View full documentation →
🧱 Platform Fixes
- ENG-879 – Bug Fix: Supporting pipelines added via the Workflow Builder were incorrectly saved as output pipelines. Workflows now correctly preserve the distinction between supporting and output pipelines.
- ENG-881 – Bug Fix: Executing a pipeline with no transformation steps configured would enter an endless loop. The execution engine now detects empty pipelines and completes gracefully.
- ENG-883 – Bug Fix: Saving an hourly cron schedule without entering a value in the Minutes field triggered an internal server error and corrupted the schedule state. The platform now validates the field before saving.
- ENG-886 – Bug Fix: Mapping group conditions created by the AI Copilot were not visible in the mapping rules editor. The condition indicators now render correctly regardless of how the mapping group was generated.
- ENG-887 – Bug Fix: The Dataset Builder failed to detect and apply transformations for API responses using the additionalProperties map-of-arrays pattern. These dynamic structures are now flattened correctly into tabular data.
- ENG-856 – Bug Fix: When an API endpoint returned an error after successful authentication in the Connector Copilot, the failure was silently swallowed with no feedback to the user. Error responses are now surfaced clearly.
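For context on the ENG-887 fix: the additionalProperties map-of-arrays shape keys each array of records by a dynamic property name, which is awkward to tabulate. A minimal sketch of the flattening, with hypothetical field names, looks like this:

```python
def flatten_map_of_arrays(payload, key_name="key"):
    """Flatten {"k": [record, ...], ...} into tabular rows,
    copying each dynamic map key into its rows."""
    rows = []
    for key, records in payload.items():
        for record in records:
            rows.append({key_name: key, **record})
    return rows

# Hypothetical response using the additionalProperties map-of-arrays shape.
response = {
    "USD": [{"date": "2025-03-20", "rate": 1.0}],
    "EUR": [{"date": "2025-03-20", "rate": 0.92},
            {"date": "2025-03-21", "rate": 0.93}],
}
rows = flatten_map_of_arrays(response, key_name="currency")
# 3 rows, e.g. {"currency": "EUR", "date": "2025-03-21", "rate": 0.93}
```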
🚀 What's Next
Our spring roadmap is taking shape, and it's ambitious. Some of the features below have been on the horizon for a few releases. As our user base grows, we're proactively adjusting priorities based on real-world feedback and evolving use cases. These features remain central to our vision, and we're committed to delivering them in subsequent releases.
- Holistic Pipeline Editing with Artifacts: Refine and revise entire pipelines conversationally, with the ability to make broad iterative changes rather than editing one transformation at a time
- Build Integrations from Documentation: Generate Connector Copilot integrations directly from API documentation websites, even when no OpenAPI specification is available
- Natural Language Mapping Groups: Create and edit standalone mapping groups entirely through natural language prompts, with full parity to the pipeline editing experience
- Metadata-Aware Dataset Configuration: Automatically discover and use metadata APIs to populate dropdown options during dataset creation
- Nexadata MCP Server: Interact with your data pipelines through natural language directly from Claude Desktop, enabling conversational pipeline management, troubleshooting, and schema updates without leaving your development environment
Thanks for being part of the Nexadata journey!
— The Nexadata Team