The watch mode in pages dev for Advanced Mode projects is currently partially broken, as it only watches for changes in the “_worker.js” file, but not for changes in any of its imported dependencies. This means that given the following “_worker.js” file
import { graham } from "./graham-the-dog";

export default {
  fetch(request, env) {
    return new Response(graham);
  }
}
pages dev will reload for any changes in the _worker.js file itself, but not for any changes in graham-the-dog.js, which is its dependency.
Similarly, pages dev will not reload for any changes in non-JS module imports, such as wasm/html/binary module imports.
Sometimes, users want to replace modules with other modules. This commonly happens inside a third party dependency itself. As an example, a user might have imported node-fetch, which will probably never work in workerd. You can use the alias config to replace any of these imports with a module of your choice.
Let’s say you make a fetch-nolyfill.js
export default fetch; // all this does is export the standard fetch function
You can then configure wrangler.toml like so:
# ...
[alias]
"node-fetch": "./fetch-nolyfill"
So any calls to import fetch from 'node-fetch'; will simply use our nolyfilled version.
You can also pass aliases on the CLI (for both dev and deploy), like:
npx wrangler dev --alias node-fetch:./fetch-nolyfill
The watch mode in pages dev for Pages Functions projects is currently partially broken, as it only watches for file system changes in the
“/functions” directory, but not for changes in any of the Functions’ dependencies. This means that given a Pages Function math-is-fun.ts, defined as follows:
import { ADD } from "../math/add";

export async function onRequest() {
  return new Response(`${ADD} is fun!`);
}
pages dev will reload for any changes in math-is-fun.ts itself, but not for any changes in math/add.ts, which is its dependency.
Similarly, pages dev will not reload for any changes in non-JS module imports, such as wasm/html/binary module imports.
This commit fixes all these things, plus adds some extra polish to the pages dev watch mode experience.
Trying to fetch /zones fails when the account spans more than 500 zones. The fix is to use an account ID when doing so. This patch passes the account ID to the zones call, threading it through all the functions that require it.
A new GA release for the macOS WARP client is now available in the App Center. This release includes some exciting new features. It also includes additional fixes and minor improvements.
New features:
Admins can now elect to have ZT WARP clients connect using the MASQUE protocol; this setting is in Device Profiles. Note: before MASQUE can be used, the global setting for Override local interface IP must be enabled. For more detail, refer to Device tunnel protocol. This feature will be rolled out to customers in stages over approximately the next month.
The Device Posture client certificate check has been substantially enhanced. The primary enhancement is the ability to check for client certificates that have unique common names, made unique by the inclusion of the device serial number or host name (for example, CN = 123456.mycompany, where 123456 is the device serial number).
Additional changes and improvements:
Fixed a known issue where the certificate was not always properly left behind in /Library/Application Support/Cloudflare/installed_cert.pem.
Fixed an issue where re-auth notifications were not cleared from the UI when the user switched configurations.
Fixed a macOS firewall rule that allowed all UDP traffic to go outside the tunnel. Relates to TunnelVision (CVE-2024-3661).
Fixed an issue that could cause the Cloudflare WARP menu bar application to disappear when switching configurations.
Warning:
This is the last GA release that will support the older, deprecated warp-cli commands. There are two ways to identify these commands: first, when used in this release, a deprecated command will still work but will also return a deprecation warning; second, deprecated commands do not appear in the output of warp-cli -h.
Known issues:
If a user has an MDM file configured to support multiple profiles (for the switch configurations feature), and then changes to an MDM file configured for a single profile, the WARP client may not connect. The workaround is to use the warp-cli registration delete command to clear the registration, and then re-register the client.
There are known limitations preventing the use of the MASQUE tunnel protocol in certain scenarios. Do not use the MASQUE tunnel protocol if:
A Magic WAN integration is on the account and does not have the latest packet flow path for WARP traffic. Please check migration status with your account team.
A new GA release for the Windows WARP client is now available in the App Center. This release includes some exciting new features. It also includes additional fixes and minor improvements.
New features:
Admins can now elect to have ZT WARP clients connect using the MASQUE protocol; this setting is in Device Profiles. Note: before MASQUE can be used, the global setting for Override local interface IP must be enabled. For more detail, refer to Device tunnel protocol. This feature will be rolled out to customers in stages over approximately the next month.
The ZT WARP client on Windows devices can now connect before the user completes their Windows login. This Windows pre-login capability allows for connecting to on-premise Active Directory and/or similar resources necessary to complete the Windows login.
The Device Posture client certificate check has been substantially enhanced. The primary enhancement is the ability to check for client certificates that have unique common names, made unique by the inclusion of the device serial number or host name (for example, CN = 123456.mycompany, where 123456 is the device serial number).
The upgrade window now uses international date formats.
Made a change to ensure DEX tests are not running when the tunnel is not up due to the device going to or waking from sleep. This is specific to devices using the S3 power model.
Fixed a known issue where the certificate was not always properly left behind in %ProgramData%\Cloudflare\installed_cert.pem.
Fixed an issue where ICMPv6 Neighbor Solicitation messages were being incorrectly sent on the WARP tunnel.
Fixed an issue where a silent upgrade was causing certain files to be deleted if the target upgrade version is the same as the current version.
Warning:
This is the last GA release that will support the older, deprecated warp-cli commands. There are two ways to identify these commands: first, when used in this release, a deprecated command will still work but will also return a deprecation warning; second, deprecated commands do not appear in the output of warp-cli -h.
Known issues:
If a user has an MDM file configured to support multiple profiles (for the switch configurations feature), and then changes to an MDM file configured for a single profile, the WARP client may not connect. The workaround is to use the warp-cli registration delete command to clear the registration, and then re-register the client.
There are known limitations preventing the use of the MASQUE tunnel protocol in certain scenarios. Do not use the MASQUE tunnel protocol if:
A Magic WAN integration is on the account and does not have the latest packet flow path for WARP traffic. Please check migration status with your account team.
wrangler versions secret put allows you to add or update a secret even if the latest version is not fully deployed. A new version with this secret will be created; the existing secrets and config are copied from the latest version.
wrangler versions secret bulk allows you to bulk add or update multiple secrets at once; this behaves the same as secret put and will only make one new version.
wrangler versions secret list lists the secrets available to the currently deployed versions. wrangler versions secret list --latest-version or wrangler secret list will list the secrets for the latest version.
Additionally, we will now prompt for extra confirmation if attempting to rollback to a version with different secrets than the currently deployed version.
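For example, the new commands described above can be invoked like this (the secret name API_KEY is illustrative):
npx wrangler versions secret put API_KEY
npx wrangler versions secret list --latest-version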
This enables eslint (with our react config) for the workers-playground project. Additionally, this enables the react-jsx condition in relevant tsconfig/eslint config, letting us write jsx without having React in scope.
Exceptions thrown from Durable Object internal operations and tunneled to the caller may now be populated with a .retryable: true property if the exception was likely due to a transient failure, or populated with an .overloaded: true property if the exception was due to overload.
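For example, a caller could branch on these properties when deciding whether to retry (a minimal sketch; stub stands for any Durable Object stub):
try {
  await stub.fetch(request);
} catch (e) {
  if (e.retryable) {
    // likely a transient failure: retrying (ideally with backoff) may succeed
  } else if (e.overloaded) {
    // the object is overloaded: back off rather than retrying immediately
  } else {
    throw e;
  }
}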
Stream has introduced automatically generated captions to open beta for all subscribers at no additional cost. While in beta, only English is supported and videos must be less than 2 hours. For more information, refer to the product announcement and deep dive or refer to the captions documentation to get started.
Fixed a bug where exceptions propagated from JS RPC calls to Durable Objects would lack the .remote property that exceptions from fetch() calls to Durable Objects have.
Page Shield now captures HTTP cookies set and used by your web application. The list of detected cookies is available in the Cloudflare dashboard or via the API.
This is the last of the patches that normalize dependencies across the codebase. In this batch: ws, vitest, zod, rimraf, @types/rimraf, ava, source-map, glob, cookie, @types/cookie, @microsoft/api-extractor, @types/mime, @types/yargs, devtools-protocol, @vitest/ui, execa, strip-ansi
This patch also sorts dependencies in every package.json
This is the first of a few expected patches that normalize dependency versions. This normalizes undici, concurrently, @types/node, react, react-dom, @types/react, @types/react-dom, eslint, typescript. There are no functional code changes (but there are a couple of typecheck fixes).
Follow up to https://github.com/cloudflare/workers-sdk/pull/6029, this normalizes some more dependencies: get-port, chalk, yargs, toucan-js, @typescript-eslint/parser, @typescript-eslint/eslint-plugin, esbuild-register, hono, glob-to-regexp, @cloudflare/workers-types
This patch cleans up warnings we were seeing when doing a full build. Specifically:
fixtures/remix-pages-app had a bunch of warnings about impending features that it should be upgraded to, so I did that. (tbh this one needs a full upgrade of packages, but we’ll get to that later when we’re upgrading across the codebase)
updated @microsoft/api-extractor so it didn’t complain that it didn’t match the typescript version (that we’d recently upgraded)
it also silenced a bunch of warnings when exporting types from wrangler. We’ll need to fix those, but we’ll do that when we work on unstable_dev etc.
workers-playground was complaining about the size of the bundle being generated, so I increased the limit on it
I’ve been experimenting with esbuild 0.21.4 with wrangler. It’s mostly been fine. But I get this warning every time
▲ [WARNING] Import "__INJECT_FOR_TESTING_WRANGLER_MIDDLEWARE__" will always be undefined because there is no matching export in "src/index.ts" [import-is-undefined]
.wrangler/tmp/bundle-Z3YXTd/middleware-insertion-facade.js:8:23:
8 │ .....(OTHER_EXPORTS.__INJECT_FOR_TESTING_WRANGLER_MIDDLEWARE__ ?? []),
╵
This is because esbuild@0.18.5 enabled a warning by default whenever an undefined import is accessed on an imports object. However we abuse imports to inject stuff in middleware.test.ts. A simple fix is to only inject that code in tests.
Added filter operators for scripts and connections
You can now filter scripts and connections in the Cloudflare dashboard using the does not contain operator. Pages associated with scripts and connections can be filtered by includes, starts with, and ends with.
It turns out that esbuild paths are case insensitive, which can result in path collisions between polyfills for globalThis.performance and globalThis.Performance, etc.
This change ensures that we encode all global names to lowercase and decode them appropriately.
Updated response codes on requests for errored videos
Stream will now return HTTP error status 424 (failed dependency) when requesting segments, manifests, thumbnails, downloads, or subtitles for videos that are in an errored state. Previously, Stream would return one of several 5xx codes for requests like this.
Deprecation announcement for @cf/meta/llama-2-7b-chat-int8
We will be deprecating @cf/meta/llama-2-7b-chat-int8 on 2024-06-30.
Replace the model ID in your code with a new model of your choice:
@cf/meta/llama-3-8b-instruct is the newest model in the Llama family (and is currently free for a limited time on Workers AI).
@cf/meta/llama-3-8b-instruct-awq is the new Llama 3 in a similar precision to your currently selected model. This model is also currently free for a limited time.
If you do not switch to a different model by June 30th, we will automatically start returning inference from @cf/meta/llama-3-8b-instruct-awq.
Customers can now scan their Bitbucket Cloud workspaces for a variety of contextualized security issues such as source code exposure, admin misconfigurations, and more.
DDoS alerts are now available for EU Customer Metadata Boundary (CMB) customers. This includes all DDoS alert types (Standard and Advanced) for both HTTP DDoS attacks and L3/4 DDoS attacks.
Customers using Gateway to filter traffic to Magic WAN destinations will now see traffic from Cloudflare egressing with WARP virtual IP addresses (CGNAT range), rather than public Cloudflare IP addresses. This simplifies configuration and improves visibility for customers.
WAF attack score now automatically detects and decodes Base64 and JavaScript (Unicode escape sequences) in HTTP requests. This update is available for all customers with access to WAF attack score (Business customers with access to a single field and Enterprise customers).
Compatibility improvements to how Hyperdrive interoperates with the popular Postgres.js driver have been released. These improvements allow queries made via Postgres.js to be correctly cached (when enabled) in Hyperdrive.
Developers who had previously set prepare: false can remove this configuration when establishing a new Postgres.js client instance.
In the upgrade window, a change was made to use international date formats to resolve an issue with localization.
Made a change to ensure DEX tests are not running when the tunnel is not up due to the device going to or waking from sleep. This is specific to devices using the S3 power model.
Fixed a known issue where the certificate was not always properly left behind in %ProgramData%\Cloudflare\installed_cert.pem.
Fixed an issue where ICMPv6 Neighbor Solicitation messages were being incorrectly sent on the WARP tunnel.
Known issues:
If a user has an MDM file configured to support multiple profiles (for the switch configurations feature), and then changes to an MDM file configured for a single profile, the WARP client may not connect. The workaround is to use the warp-cli registration delete command to clear the registration, and then re-register the client.
Fixed a known issue where the certificate was not always properly left behind in /Library/Application Support/Cloudflare/installed_cert.pem.
Fixed an issue where the reauth notification was not cleared from the UI when the user switched configurations.
Fixed an issue with how the WARP client sets macOS firewall rules. This relates to TunnelVision (CVE-2024-3661).
Fixed an issue that could cause the Cloudflare WARP menu bar application to disappear when switching configurations.
Known issues:
If a user has an MDM file configured to support multiple profiles (for the switch configurations feature), and then changes to an MDM file configured for a single profile, the WARP client may not connect. The workaround is to use the warp-cli registration delete command to clear the registration, and then re-register the client.
The new fetch_standard_url compatibility flag will become active by default on June 3rd, 2024 and ensures that URLs passed into the fetch(...) API, the new Request(...) constructor, and redirected requests will be parsed using the standard WHATWG URL parser.
DigestStream is now more efficient and exposes a new bytesWritten property that indicates the number of bytes written to the digest.
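For example (a minimal sketch using the Workers crypto.DigestStream API):
const digest = new crypto.DigestStream("SHA-256");
const writer = digest.getWriter();
await writer.write(new TextEncoder().encode("hello"));
await writer.close();
await digest.digest; // resolves with the computed hash
console.log(digest.bytesWritten); // 5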
The Page Rules migration guide is now available for users interested in transitioning to modern Rules features instead of Page Rules. Explore the guide for detailed instructions on migrating your configurations.
A bug in the fetch API implementation would cause the content type of a Blob to be incorrectly set. The fix is being released behind a new blob_standard_mime_type compatibility flag.
A new GA release for the Android Cloudflare One Agent is now available in the Google Play Store. This release fixes an issue where the user was not prompted to select the client certificate in the browser during Access registration.
A new GA release for the macOS WARP client is now available in the App Center. This release fixes an issue with how the WARP client sets macOS firewall rules and addresses the TunnelVision (CVE-2024-3661) vulnerability.
Fixed RPC to/from Durable Objects not honoring the output gate.
The internal_stream_byob_return_view compatibility flag can be used to improve the standards compliance of the ReadableStreamBYOBReader implementation when working with BYOB streams provided by the runtime (like in response.body or request.body). The flag ensures that the final read result will always include a value field whose value is set to an empty Uint8Array whose underlying ArrayBuffer is the same memory allocation as the one passed in on the call to read().
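For example (a sketch, assuming response is a Response whose body is a runtime-provided stream; the 1024-byte buffer size is arbitrary):
const reader = response.body.getReader({ mode: "byob" });
const { value, done } = await reader.read(new Uint8Array(1024));
// With the flag enabled, the final read's value is an empty Uint8Array backed
// by the same ArrayBuffer that was passed into read().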
The Web platform standard reportError(err) global API is now available in Workers. The reported error will first be emitted as an 'error' event on the global scope and then reported in both the console output and tail worker exceptions by default.
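For example (a minimal sketch):
addEventListener("error", (event) => {
  // the reported error is dispatched here first
  console.log("reported:", event.message);
});
reportError(new Error("something went wrong"));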
When creating a policy in the dashboard, the default directive now aggregates suggestions from monitored scripts and connections data, making it easier to define the default directive.
Network Analytics now supported for EU CMB customers
The Network Analytics dashboard is available to customers that have opted in to the EU Customer Metadata Boundary (CMB) solution. This also includes Network Analytics Logs (Logpush) and GraphQL API.
API users can ensure they are routed properly by directing their API requests at eu.api.cloudflare.com.
HTTP API now returns an HTTP 400 error for invalid queries
Previously, D1’s HTTP API returned an HTTP 500 Internal Server Error for an invalid query. An invalid SQL query now correctly returns an HTTP 400 Bad Request error.
Improve Streams API spec compliance by exposing desiredSize and other properties on stream class prototypes
The new URL.parse(...) method is implemented. This provides an alternative to the URL constructor that does not throw exceptions on invalid URLs.
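For example:
const url = URL.parse("not a valid url");
if (url === null) {
  // invalid input returns null instead of throwing
}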
R2 bindings objects now have a storageClass option. This can be set on object upload to specify the R2 storage class - Standard or Infrequent Access. The property is also returned with object metadata.
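For example (a sketch assuming a bucket binding named MY_BUCKET):
const data = "example body"; // placeholder content
await env.MY_BUCKET.put("archive/report.csv", data, {
  storageClass: "InfrequentAccess",
});
const head = await env.MY_BUCKET.head("archive/report.csv");
console.log(head.storageClass); // "InfrequentAccess"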
Added a new native AI binding; you can now run models with const resp = await env.AI.run(modelName, inputs)
Deprecated @cloudflare/ai npm package. While existing solutions using the @cloudflare/ai package will continue to work, no new Workers AI features will be supported.
Moving to native AI bindings is highly recommended.
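For example, with an AI binding configured (the model ID and prompt below are illustrative):
const resp = await env.AI.run("@cf/meta/llama-3-8b-instruct", {
  prompt: "What is a Cloudflare Worker?",
});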
Now that D1 is generally available and production ready, alpha D1 databases are deprecated and should be migrated for better performance, reliability, and ongoing support.
There is no longer an explicit limit on the total amount of data which may be uploaded with Cache API put() per request. Other Cache API Limits continue to apply.
The Web standard ReadableStream.from() API is now implemented. The API enables creating a ReadableStream from either a sync or async iterable.
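For example:
const stream = ReadableStream.from(["hello", "from", "an", "iterable"]);
for await (const chunk of stream) {
  console.log(chunk); // "hello", "from", ...
}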
D1 is now generally available and production ready. Read the blog post for more details on new features in GA and to learn more about the upcoming D1 read replication API.
Developers with a Workers Paid plan now have a 10 GB per-database limit (up from 2 GB), which can be combined with the existing limit of 50,000 databases per account.
Developers with a Workers Free plan retain the 500 MB per-database limit and can create up to 10 databases per account.
Durable Objects request billing applies a 20:1 ratio for incoming WebSocket messages. For example, 1 million received WebSocket messages across connections would be charged as 50,000 Durable Objects requests.
This is a billing-only calculation and does not impact Durable Objects metrics and analytics.
The new unwrap_custom_thenables compatibility flag enables workers to accept custom thenables in internal APIs that expect a promise (for instance, the ctx.waitUntil(...) method).
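For example, in a module Worker's fetch handler, a hand-rolled thenable is now accepted (a minimal sketch):
ctx.waitUntil({
  then(resolve, reject) {
    // any object with a then() method now works where a promise is expected
    queueMicrotask(resolve);
  },
});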
TransformStreams created with the TransformStream constructor now have a cancel algorithm that is called when the stream is canceled or aborted. This change is part of the implementation of the WHATWG Streams standard.
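For example:
const ts = new TransformStream({
  transform(chunk, controller) {
    controller.enqueue(chunk);
  },
  cancel(reason) {
    // now invoked when the stream is canceled or aborted,
    // giving you a place to release underlying resources
    console.log("canceled:", reason);
  },
});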
Messages published to a queue and/or marked for retry from a queue consumer can now be explicitly delayed. Delaying messages allows you to defer tasks until later, and/or respond to backpressure when consuming from a queue.
Refer to Batching and Retries to learn how to delay messages written to a queue.
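For example (a sketch assuming a queue binding named MY_QUEUE; the delay values are illustrative, and message refers to a message inside a queue consumer):
// delay an individual message by 10 minutes at publish time
await env.MY_QUEUE.send({ userId: 123 }, { delaySeconds: 600 });

// or, in a consumer, retry a message with a delay to respond to backpressure
message.retry({ delaySeconds: 300 });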
Queues now supports pull-based consumers. A pull-based consumer allows you to pull from a queue over HTTP from any environment and/or programming language outside of Cloudflare Workers. A pull-based consumer can be useful when your message consumption rate is limited by upstream infrastructure or long-running tasks.
Customers can now use new fields cf.tls_client_hello_length (the length of the client hello message sent in a TLS handshake), cf.tls_client_random (the value of the 32-byte random value provided by the client in a TLS handshake), and cf.tls_client_extensions_sha1 (the SHA-1 fingerprint of TLS client extensions) in various products built on Ruleset Engine.
Removed dependency on third-party cookies in the isolated browser, fixing an issue that previously caused intermittent disruptions for users maintaining multi-site, cross-tab sessions in the isolated browser.
Origin Rules now allow port numbers in Host Header Override
Customers can now use arbitrary port numbers in Host Header Override in Origin Rules. Previously, only hostname was allowed as a value (for example, example.com). Now, you can set the value to hostname:port (for example, example.com:1234) as well.
Hyperdrive now supports a WRANGLER_HYPERDRIVE_LOCAL_CONNECTION_STRING_<BINDING_NAME> environment variable for configuring local development to use a test/non-production database, in addition to the localConnectionString configuration in wrangler.toml.
Refer to Local development for instructions on how to configure Hyperdrive locally.
The default content type for messages published to a queue is now json, which improves compatibility with the upcoming pull-based queues.
Any Workers created on or after the compatibility date of 2024-03-18, or that explicitly set the queues_json_messages compatibility flag, will use the new default behaviour. Existing Workers with an earlier compatibility date will continue to use v8 as the default content type for published messages.
As of wrangler@3.33.0, wrangler d1 execute and wrangler d1 migrations apply now default to using a local database, to match the default behavior of wrangler dev.
It is also now possible to specify one of --local or --remote to explicitly tell wrangler which environment you wish to run your commands against.
Cloudflare Trace now supports grey-clouded hostnames
Even if the hostname is not proxied by Cloudflare, Cloudflare Trace will now return all the configurations that Cloudflare would have applied to the request.
Built-in APIs that return Promises will now produce stack traces when the Promise rejects. Previously, the rejection error lacked a stack trace.
A new compat flag fetcher_no_get_put_delete removes the get(), put(), and delete() methods on service bindings and Durable Object stubs. This will become the default as of compatibility date 2024-03-26. These methods were designed as simple convenience wrappers around fetch(), but were never documented.
As of 2024-03-05, D1 usage will start to be counted and may incur charges for an account’s future billing cycle.
Developers on the Workers Paid plan with D1 usage beyond included limits will incur charges according to D1’s pricing.
Developers on the Workers Free plan can use up to the included limits. Usage beyond the limits below requires signing up for the $5/month Workers Paid plan.
Explicit retries no longer impact consumer concurrency/scaling.
Calling retry() or retryAll() on a message or message batch will no longer have an impact on how Queues scales consumer concurrency.
Previously, using explicit retries via retry() or retryAll() would count as an error and could result in Queues scaling down the number of concurrent consumers.
Allow users to log in to Access applications with their WARP session identity. Users need to reauthenticate based on default session durations. WARP authentication identity must be turned on in your device enrollment permissions and can be enabled on a per application basis.
A previous change (made on 2024-02-13) to the run() query statement method has been reverted.
run() now returns a D1Result, including the result rows, matching its original behaviour prior to the change on 2024-02-13.
A future change to run() to return a D1ExecResult, as originally intended and documented, will be gated behind a compatibility date so as to avoid breaking existing Workers that rely on the way run() currently works.
In certain cases, videos uploaded with an HDR colorspace (such as footage from certain mobile devices) appeared washed out or desaturated when played back. This issue is resolved for new uploads.
D1’s raw(), all() and run() query statement methods have been updated to reflect their intended behaviour and improve compatibility with ORM libraries.
raw() now correctly returns results as an array of arrays, allowing the correct handling of duplicate column names (such as when joining tables), as compared to all(), which is unchanged and returns an array of objects. To include an array of column names in the results when using raw(), use raw({columnNames: true}).
run() no longer incorrectly returns a D1Result and instead returns a D1ExecResult as originally intended and documented.
This may be a breaking change for some applications that expected raw() to return an array of objects.
Refer to D1 client API to review D1’s query methods, return types and TypeScript support in detail.
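For example, the raw() behaviour described above looks like this (a sketch assuming a D1 binding named DB):
const rows = await env.DB.prepare("SELECT name, age FROM users").raw();
// rows is an array of arrays, e.g. [["Alice", 30], ["Bob", 25]]

const withColumns = await env.DB
  .prepare("SELECT name, age FROM users")
  .raw({ columnNames: true });
// the first inner array holds the column names: [["name", "age"], ["Alice", 30], ...]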
You can define policies in your Connector to either allow traffic to flow between your LANs without it leaving your local premises or to forward it via the Cloudflare network where you can add additional security features.
HTTP API query vectors request and response format change
The Vectorize /query HTTP endpoint has the following changes:
The returnVectors request body property is deprecated in favor of the returnValues and returnMetadata properties.
The response format has changed to match the Workers API change (/workers/configuration/compatibility-dates/#vectorize-query-with-metadata-optionally-returned).
All new Access for SaaS applications have unique Entity IDs. This allows for multiple integrations with the same SaaS provider if required. The unique Entity ID has the application audience tag appended. Existing apps are unchanged.
Databases using D1’s legacy alpha backend will no longer run automated hourly backups. You may still choose to take manual backups of these databases.
The D1 team recommends moving to D1’s new production backend, which will require you to export and import your existing data. D1’s production backend is faster than the original alpha backend. The new backend also supports Time Travel, which allows you to restore your database to any minute in the past 30 days without relying on hourly or manual snapshots.
Vectorize now supports metadata filtering with equals ($eq) and not equals ($neq) operators. Metadata filtering limits query() results to only those vectors that fulfill the new filter property.
let metadataMatches = await env.YOUR_INDEX.query(queryVector, {
  topK: 3,
  filter: { streaming_platform: "netflix" },
  returnValues: true,
  returnMetadata: true,
});
Only new indexes created on or after 2023-12-06 support metadata filtering. Currently, there is no way to migrate previously created indexes to work with metadata filtering.
Vectorize now supports distinct returnMetadata and returnValues arguments when querying an index, replacing the now-deprecated returnVectors argument. This allows you to return metadata without needing to return the vector values, reducing the amount of unnecessary data returned from a query. Both returnMetadata and returnValues default to false.
For example, to return only the metadata from a query, set returnMetadata: true.
let matches = await env.YOUR_INDEX.query(queryVector, { topK: 5, returnMetadata: true });
New Workers projects created on or after 2023-11-08 or that update the compatibility date for an existing project will use the new return type.
HLS output from Cloudflare Stream on-demand videos that use Transport Stream file format now includes a 10 second offset to timestamps. This will have no impact on most customers. A small percentage of customers will see improved playback stability. Caption files were also adjusted accordingly.
Added a direction parameter to all Layer 3 endpoints. Use it together with the location parameter to filter timeseries groups by origin or target location.
A new usage model called Workers Standard is available for Workers and Pages Functions pricing. This is now the default usage model for accounts that are first upgraded to the Workers Paid plan. Read the blog post for more information.
The usage model set in a script’s wrangler.toml will be ignored after an account has opted-in to Workers Standard pricing. It must be configured through the dashboard (Workers & Pages > Select your Worker > Settings > Usage Model).
Workers and Pages Functions on the Standard usage model can set custom CPU limits for their Workers
Real-time Logs: Logs are now real-time, showing logs for the last hour. If you have a need for persistent logs, please let the team know on Discord. We are building out a persistent logs feature for those who want to store their logs for longer.
Providers: Azure OpenAI is now supported as a provider!
Docs: Added Azure OpenAI example.
Bug Fixes: Errors with costs and tokens should be fixed.
Added the crypto_preserve_public_exponent compatibility flag to correct a wrong type being used in the algorithm field of RSA keys in the WebCrypto API.
Logs: Logs will now be limited to the last 24h. If you have a use case that requires more logging, please reach out to the team on Discord.
Dashboard: Logs now refresh automatically.
Docs: Fixed Workers AI example in docs and dash.
Caching: Embedding requests are now cacheable. Rate limit will not apply for cached requests.
Bug Fixes: Identical requests to different providers are not wrongly served from cache anymore. Streaming now works as expected, including for the Universal endpoint.
Known Issues: There’s currently a bug with costs that we are investigating.
Fixed a bug in the WebCrypto API where the publicExponent field of the algorithm of RSA keys would have the wrong type. Use the crypto_preserve_public_exponent compatibility flag to enable the new behavior.
Queue consumers can now scale to 20 concurrent invocations (per queue), up from 10. This allows you to scale out and process higher throughput queues more quickly.
You can now create up to 100 Vectorize indexes per account. Read the limits documentation for details on other limits, many of which will increase during the beta period.
D1 is now in public beta, and storage limits have been increased:
Developers with a Workers Paid plan now have a 2 GB per-database limit (up from 500 MB) and can create 25 databases per account (up from 10). These limits will continue to increase automatically during the public beta.
Developers with a Workers Free plan retain the 500 MB per-database limit and can create up to 10 databases per account.
Databases must be using D1’s new storage subsystem to benefit from the increased database limits.
Vectorize, Cloudflare’s vector database, is now in open beta. Vectorize allows you to store and efficiently query vector embeddings from AI/ML models from Workers AI, OpenAI, and other embeddings providers or machine-learning workflows.
Low-Latency HTTP Live Streaming (LL-HLS) is now in open beta. Enable LL-HLS on your live input for automatic low-latency playback using the Stream built-in player where supported.
D1 now returns a count of rows_written and rows_read for every query executed, allowing you to assess the cost of a query for both pricing and index optimization purposes.
The meta object returned in D1’s Client API contains a total count of the rows read (rows_read) and rows written (rows_written) by that query. For example, a query that performs a full table scan (for example, SELECT * FROM users) from a table with 5000 rows would return a rows_read value of 5000:
"meta":{
"duration":0.20472300052642825,
"size_after":45137920,
"rows_read":5000,
"rows_written":0
}
Refer to D1 pricing documentation to understand how reads and writes are measured. D1 remains free to use during the alpha period.
Stopped collecting data in the old Layer 3 data source.
Updated the Layer 3 timeseries endpoint to start using the new Layer 3 data source by default; fetching the old data source now requires sending the parameter metric=bytes_old.
Deprecated the Layer 3 summary endpoint; it will stop receiving data after 2023-08-14.
Users can now complete conditional multipart publish operations. When a condition failure occurs while publishing an upload, the upload is no longer available and is treated as aborted.
You can now bind a D1 database to your Workers directly in the Cloudflare dashboard. To bind D1 from the Cloudflare dashboard, select your Worker project > Settings > Variables > D1 Database Bindings.
Note: If you have previously deployed a Worker with a D1 database binding with a version of wrangler prior to 3.5.0, you must upgrade to wrangler v3.5.0 first before you can edit your D1 database bindings in the Cloudflare dashboard. New Workers projects do not have this limitation.
Legacy D1 alpha users who had previously prefixed their database binding manually with __D1_BETA__ should remove this as part of this upgrade. Your Worker scripts should call your D1 database via env.BINDING_NAME only. Refer to the latest D1 getting started guide for best practices.
We recommend all D1 alpha users begin using wrangler 3.5.0 (or later) to benefit from improved TypeScript types and future D1 API improvements.
Stream now supports adding a scheduled deletion date to new and existing videos. Live inputs support deletion policies for automatic recording deletion.
Databases using D1’s new storage subsystem can now grow to 500 MB each, up from the previous 100 MB limit. This applies to both existing and newly created databases.
Updated HTTP timeseries endpoint URLs to timeseries_groups for consistency. The old timeseries endpoints are still available, but will soon be removed.
Databases created via the Cloudflare dashboard and Wrangler (as of v3.4.0) now use D1’s new storage subsystem by default. The new backend can be 6 - 20x faster than D1’s original alpha backend.
To understand which storage subsystem your database uses, run wrangler d1 info YOUR_DATABASE and inspect the version field in the output.
Databases with version: beta use the new storage backend and support the Time Travel API. Databases with version: alpha only use D1’s older, legacy backend.
Time Travel is now available. Time Travel allows you to restore a D1 database back to any minute within the last 30 days (Workers Paid plan) or 7 days (Workers Free plan), at no additional cost for storage or restore operations.
Refer to the Time Travel documentation to learn how to travel backwards in time.
Improved performance for ranged reads on very large files. Previously, ranged reads near the end of very large files would be noticeably slower than ranged reads on smaller files. Performance should now be consistently good, independent of file size.
New documentation has been published on how to use D1’s support for generated columns to define columns that are dynamically generated on write (or read). Generated columns allow you to extract data from JSON objects or use the output of other SQL functions.
Fixed a bug where calling GetBucket on a non-existent bucket would return a 500 instead of a 404.
Improved S3 compatibility for ListObjectsV1; now nextmarker is only set when truncated is true.
The R2 worker bindings now support parsing conditional headers with multiple etags. These etags can now be strong, weak or a wildcard. Previously the bindings only accepted headers containing a single strong etag.
S3 putObject now supports sha256 and sha1 checksums. These were already supported by the R2 worker bindings.
CopyObject in the S3 compatible api now supports Cloudflare specific headers which allow the copy operation to be conditional on the state of the destination object.
To facilitate a transition from the previous Error.cause behaviour, detailed error messages will continue to be populated within Error.cause as well as the top-level Error object until approximately July 14th, 2023. Future versions of both wrangler and the D1 client API will no longer populate Error.cause after this date.
Following an update to the WHATWG URL spec, the delete() and has() methods of the URLSearchParams class now accept an optional second argument to specify the search parameter’s value. This is potentially a breaking change, so it is gated behind the new urlsearchparams_delete_has_value_arg and url_standard compatibility flags.
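For example, with the flags enabled:
const params = new URLSearchParams("a=1&a=2");
params.delete("a", "2");            // removes only the a=2 pair
console.log(params.has("a", "1"));  // true
console.log(params.toString());     // "a=1"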
A new Hibernatable WebSockets API
(beta) has been added to Durable Objects. The Hibernatable
WebSockets API allows a Durable Object that is not currently running an event
handler (for example, processing a WebSocket message or alarm) to be removed from
memory while keeping its WebSockets connected (“hibernation”). A Durable Object
that hibernates will not incur billable Duration (GB-sec) charges.
D1 has a new experimental storage back end that dramatically improves query throughput, latency and reliability. The experimental back end will become the default back end in the near future. To create a database using the experimental backend, use wrangler and set the --experimental-backend flag when creating a database:
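npx wrangler d1 create my-database --experimental-backend # the database name here is illustrative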
You can now provide a location hint when creating a D1 database, which will influence where the leader (writer) is located. By default, D1 will automatically create your database in a location close to where you issued the request to create a database. In most cases this allows D1 to choose the optimal location for your database on your behalf.
New documentation has been published that covers D1’s extensive JSON function support. JSON functions allow you to parse, query and modify JSON directly from your SQL queries, reducing the number of round trips to your database, or data queried.
The V2 build system is now available in open beta. Enable the V2 build system by going to your Pages project in the Cloudflare dashboard and selecting Settings > Build & deployments > Build system version.
The new connect() method allows you to connect to any TCP-speaking services directly from your Workers. To learn more about other protocols supported on the Workers platform, visit the new Protocols documentation.
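For example (a minimal sketch; the hostname, port, and payload are illustrative):
import { connect } from "cloudflare:sockets";

export default {
  async fetch(request, env) {
    // open a raw TCP connection and write to it
    const socket = connect({ hostname: "tcp.example.com", port: 4000 });
    const writer = socket.writable.getWriter();
    await writer.write(new TextEncoder().encode("PING\r\n"));
    await writer.close();
    return new Response("sent");
  },
};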
We have added new native database integrations for popular serverless database providers, including Neon, PlanetScale, and Supabase. Native integrations automatically handle the process of creating a connection string and adding it as a Secret to your Worker.
You can now also connect directly to databases over TCP from a Worker, starting with PostgreSQL. Support for PostgreSQL is based on the popular pg driver, and allows you to connect to any PostgreSQL instance over TLS from a Worker directly.
The R2 Migrator (Super Slurper), which automates the process of migrating from existing object storage providers to R2, is now Generally Available.
Cursor, an experimental AI assistant trained to answer questions about Cloudflare’s Developer Platform, is now available to preview! Cursor can answer questions about Workers and the Cloudflare Developer Platform, and is itself built on Workers. You can read more about Cursor in the announcement blog.
The new nodeJsCompatModule type can be used with a Worker bundle to emulate a Node.js environment. Common Node.js globals such as process and Buffer will be present, and require('...') can be used to load Node.js built-ins without the node: specifier prefix.
Fixed an issue where websocket connections would be disconnected when updating workers. Now, only websockets connected to Durable Object instances are disconnected by updates to that Durable Object’s code.
Cloudflare Stream now supports player enhancement properties.
With player enhancements, you can modify your video player to incorporate elements of your branding, such as your logo, and customize additional options to present to your viewers.
For more, refer to the documentation to get started.
URL.canParse(...) is a new standard API for testing that an input string can be parsed successfully as a URL without the additional cost of creating and throwing an error.
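For example:
if (URL.canParse(userInput)) {
  const url = new URL(userInput); // guaranteed not to throw
}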
The Workers-specific IdentityTransformStream and FixedLengthStream classes now support specifying a highWaterMark for the writable-side that is used for backpressure signaling using the standard writer.desiredSize/writer.ready mechanisms.
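For example (a sketch, assuming the highWaterMark is passed as a constructor option; the value is illustrative):
const { readable, writable } = new IdentityTransformStream({ highWaterMark: 4096 });
const writer = writable.getWriter();
await writer.ready;              // resolves according to the configured highWaterMark
console.log(writer.desiredSize); // standard backpressure signal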
Queue consumers will now automatically scale up based on the number of messages being written to the queue. To control or limit concurrency, you can explicitly define a max_concurrency for your consumer.
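For example, in wrangler.toml (the queue name and limit are illustrative):
[[queues.consumers]]
queue = "my-queue"
max_concurrency = 5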
Fixed a bug in Wrangler tail and live logs on the dashboard that
prevented the Administrator Read-Only and Workers Tail Read roles from successfully
tailing Workers.
Previously, generating a download for a live recording exceeding four hours resulted in failure. To fix the issue, video downloads are now only available for live recordings under four hours. Live recordings exceeding four hours can still be played but cannot be downloaded.
Queue consumers will soon automatically scale up concurrently as a queue’s backlog grows in order to keep overall message processing latency down. Concurrency will be enabled on all existing queues by 2023-03-28.
To opt out, or to configure a fixed maximum concurrency, set max_concurrency = 1 in your wrangler.toml file or via the queues dashboard.
To opt in, you do not need to take any action: your consumer will begin to scale out as needed to keep up with your message backlog. It will scale back down as the backlog shrinks, and/or if a consumer starts to generate a higher rate of errors. To learn more about how consumers scale, refer to the consumer concurrency documentation.
Queues now supports explicitly acknowledging individual messages: you can mark a message as delivered as you process it within a batch, which prevents the entire batch from being redelivered if your consumer throws an error during batch processing. This can be particularly useful when you are calling external APIs, writing messages to a database, or otherwise performing non-idempotent actions on individual messages within a batch.
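For example (a sketch; processMessage is a placeholder for your own logic):
export default {
  async queue(batch, env) {
    for (const message of batch.messages) {
      await processMessage(message.body);
      message.ack(); // acknowledged messages are not redelivered if a later one throws
    }
  },
};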
Fixed a bug where transferring large request bodies to a Durable Object was unexpectedly slow.
Previously, an error would be thrown when trying to access unimplemented standard Request and Response properties. Now those will be left as undefined.
IPv6 percentage started to be calculated as (IPv6 requests / requests for dual-stacked content), whereas before it was calculated as (IPv6 requests / IPv4+IPv6 requests).
Added new Layer 3 data source and related endpoints.
Updated the Layer 3 timeseries endpoint to support fetching both the current and new data sources. For backwards-compatibility reasons, fetching the new data source requires sending the parameter metric=bytes, otherwise the current data source will be returned.
Earlier detection (and rejection) of non-video uploads
Cloudflare Stream now detects non-video content on upload using the POST API and returns a 400 Bad Request HTTP error with code 10059.
Previously, if you or one of your users attempted to upload a file that is not a video (ex: an image), the request to upload would appear successful, but then fail to be encoded later on.
With this change, Stream responds to the upload request with an error, allowing you to give users immediate feedback if they attempt to upload non-video content.
Queues now allows developers to create up to 100 queues per account, up from the initial beta limit of 10 per account. This limit will continue to increase over time.
You can now deep-link to a Pages deployment in the dashboard with :pages-deployment. An example would be https://dash.cloudflare.com?to=/:account/pages/view/:pages-project/:pages-deployment.
The “per-video” analytics API is being deprecated. If you still use this API, you will need to switch to using the GraphQL Analytics API by February 1, 2023. After this date, the per-video analytics API will no longer be available.
The GraphQL Analytics API provides the same functionality and more, with additional filters and metrics, as well as the ability to fetch data about multiple videos in a single request. Queries are faster, more reliable, and built on a shared analytics system that you can use across many Cloudflare products.
Cloudflare Stream now has no limit on the number of live inputs you can create. Stream is designed to allow your end-users to go live — live inputs can be created quickly on-demand via a single API request for each user of your platform or app.
For more on creating and managing live inputs, get started with the docs.
Multipart upload part sizes are always expected to be of the same size, but this enforcement is now done when you complete an upload instead of every time you upload a part.
Fixed a performance issue where concurrent multipart part uploads would get rejected.
More accurate bandwidth estimates for live video playback
When playing live video, Cloudflare Stream now provides significantly more accurate estimates of the bandwidth needs of each quality level to client video players. This ensures that live video plays at the highest quality that viewers have adequate bandwidth to play.
As live video is streamed to Cloudflare, we transcode it to make it available to viewers at multiple quality levels. During transcoding, we learn about the real bandwidth needs of each segment of video at each quality level, and use this to provide an estimate of the bandwidth requirements of each quality level in the HLS (.m3u8) and DASH (.mpd) manifests.
If a live stream contains content with low visual complexity, like a slideshow presentation, the bandwidth estimates provided in the HLS manifest will be lower, ensuring that the most viewers possible view the highest quality level, since it requires relatively little bandwidth. Conversely, if a live stream contains content with high visual complexity, like live sports with motion and camera panning, the bandwidth estimates provided in the HLS manifest will be higher, ensuring that viewers with inadequate bandwidth switch down to a lower quality level, and their playback does not buffer.
This change is particularly helpful if you’re building a platform or application that allows your end users to create their own live streams, where these end users have their own streaming software and hardware that you can’t control. Because this new functionality adapts based on the live video we receive, rather than just the configuration advertised by the broadcaster, even in cases where your end users’ settings are less than ideal, client video players will not receive excessively high estimates of bandwidth requirements, causing playback quality to decrease unnecessarily. Your end users don’t have to be OBS Studio experts in order to get high quality video playback.
No work is required on your end — this change applies to all live inputs, for all customers of Cloudflare Stream. For more, refer to the docs.
Updated to report new metrics such as time to first byte (TTFB), interaction to next paint (INP), and first contentful paint (FCP). Additionally, it reports navigator.webdriver, server-timing header (experimental), and protocol info (nextHopProtocol).
You can now deep-link to a Pages project in the dashboard with :pages-project. An example would be https://dash.cloudflare.com?to=/:account/pages/view/:pages-project.
AV1 Codec support for live streams and recordings (beta)
Cloudflare Stream now supports playback of live videos and live recordings using the AV1 codec, which uses 46% less bandwidth than H.264.
CORS preflight responses and adding CORS headers for other responses is now implemented for S3 and public buckets. Currently, the only way to configure CORS is via the S3 API.
Fixup for bindings list truncation to work more correctly when listing keys with custom metadata that have " or when some keys/values contain certain multi-byte UTF-8 values.
The S3 GetObject operation now only returns Content-Range in response to a ranged request.
The R2 put() binding options can now be given an onlyIf field, similar to get(), that performs a conditional upload.
The R2 delete() binding now supports deleting multiple keys at once.
The R2 put() binding now supports user-specified SHA-1, SHA-256, SHA-384, SHA-512 checksums in options.
User-specified object checksums will now be available in the R2 get() and head() bindings response. MD5 is included by default for non-multipart uploaded objects.
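For example (a sketch assuming a bucket binding named MY_BUCKET; etag and expectedSha256Hex are illustrative values):
// conditional upload: only write if the stored object's etag matches
await env.MY_BUCKET.put("report.txt", "new body", { onlyIf: { etagMatches: etag } });

// delete several keys in one call
await env.MY_BUCKET.delete(["old/a.txt", "old/b.txt"]);

// supply an expected checksum; the upload fails if it does not match
await env.MY_BUCKET.put("report.txt", "new body", { sha256: expectedSha256Hex });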
Manually control when you start and stop simulcasting
You can now enable and disable individual live outputs via the API or Stream dashboard, allowing you to control precisely when you start and stop simulcasting to specific destinations like YouTube and Twitch. For more, read the docs.
The S3 DeleteObjects operation no longer trims the space from around the keys before deleting. This would result in files with leading / trailing spaces not being able to be deleted. Additionally, if there was an object with the trimmed key that existed it would be deleted instead. The S3 DeleteObject operation was not affected by this.
Fixed presigned URL support for the S3 ListBuckets and ListObjects operations.
URLs in the Stream Dashboard and Stream API now use a subdomain specific to your Cloudflare Account: customer-{CODE}.cloudflarestream.com. This change allows you to:
Use Content Security Policy (CSP) directives specific to your Stream subdomain, to ensure that only videos from your Cloudflare account can be played on your website.
Allowlist only your Stream account subdomain at the network-level to ensure that only videos from a specific Cloudflare account can be accessed on your network.
No action is required from you, unless you use Content Security Policy (CSP) on your website. For more on CSP, read the docs.
Uploads will automatically infer the Content-Type based on file body
if one is not explicitly set in the PutObject request. This functionality will
come to multipart operations in the future.
Added dummy implementation of the following operation that mimics
the response that a basic AWS S3 bucket will return when first created: GetBucketAcl.
Fixed an S3 compatibility issue for error responses with MinIO .NET SDK and any other tooling that expects no xmlns namespace attribute on the top-level Error tag.
List continuation tokens prior to 2022-07-01 are no longer accepted and must be obtained again through a new list operation.
The list() binding will now correctly return a smaller limit if too much data would otherwise be returned (previously would return an Internal Error).
Improvements to 500s: we now convert errors, so things that were previously concurrency problems for some operations should now be TooMuchConcurrency instead of InternalError. We’ve also reduced the rate of 500s through internal improvements.
ListMultipartUpload correctly encodes the returned Key if the encoding-type is specified.
S3 XML documents sent to R2 that have an XML declaration are no longer rejected with 400 Bad Request / MalformedXML.
Minor S3 XML compatibility fix impacting Arq Backup on Windows only (not the Mac version). Response now contains XML declaration tag prefix and the xmlns attribute is present on all top-level tags in the response.
Support the r2_list_honor_include compat flag coming up in an upcoming runtime release (default behavior as of 2022-07-14 compat date). Without that compat flag/date, list will continue to function implicitly as include: ['httpMetadata', 'customMetadata'] regardless of what you specify.
cf-create-bucket-if-missing can be set on a PutObject/CreateMultipartUpload request to implicitly create the bucket if it does not exist.
Fix S3 compatibility with MinIO client spec non-compliant XML for publishing multipart uploads. Any leading and trailing quotes in CompleteMultipartUpload are now optional and ignored as it seems to be the actual non-standard behavior AWS implements.
Pages now supports .dev.vars in wrangler pages, which allows you to use environment variables during your local development without chaining --envs.
This functionality requires Wrangler v2.0.16 or higher.
Unsupported search parameters to ListObjects/ListObjectsV2 are
now rejected with 501 Not Implemented.
Fixes for Listing:
Fix listing behavior when the number of files within a folder exceeds the limit (you’d end up seeing a CommonPrefix for that large folder N times, where N = number of children within the CommonPrefix / limit).
Fix a corner case where listing could cause objects sharing the base name of a "folder" to be skipped.
Fix listing over some files that shared a certain common prefix.
DeleteObjects can now handle 1000 objects at a time.
S3 CreateBucket requests can specify x-amz-bucket-object-lock-enabled with a value of false and not have the request rejected with a NotImplemented error. A value of true will continue to be rejected as R2 does not yet support object locks.
We now keep track of the files that make up each deployment and intelligently only upload the files that we have not seen. This means that similar subsequent deployments should only need to upload a minority of files and this will hopefully make uploads even faster.
This functionality requires Wrangler v2.0.11 or higher.
Fixed a bug where the S3 API’s PutObject or the .put() binding could fail but still show the bucket upload as successful.
If conditional headers are provided to S3 API UploadObject or CreateMultipartUpload operations, and the object exists, a 412 Precondition Failed status code will be returned if these checks are not met.
Add support for S3 virtual-hosted style paths, such as <BUCKET>.<ACCOUNT_ID>.r2.cloudflarestorage.com instead of path-based routing (<ACCOUNT_ID>.r2.cloudflarestorage.com/<BUCKET>).
Implemented GetBucketLocation for compatibility with external tools; this will always return a LocationConstraint of auto.
During or after uploading a video to Stream, you can now specify a value for a new field, creator. This field can be used to identify the creator of the video content, linking the way you identify your users or creators to videos in your Stream account. For more, read the blog post.
When using the S3 API, an empty string and us-east-1 will now alias to the auto region for compatibility with external tools.
GetBucketEncryption, PutBucketEncryption and DeleteBucketEncryption are now supported (the only supported value currently is AES256).
Unsupported operations are explicitly rejected as unimplemented rather than being implicitly converted into ListObjectsV2/PutBucket/DeleteBucket respectively.
S3 API CompleteMultipartUploads requests are now properly escaped.
Pagination cursors are no longer returned when the number of keys in a bucket is the same as the MaxKeys argument.
The S3 API ListBuckets operation now accepts cf-max-keys, cf-start-after and cf-continuation-token headers, which behave the same as the respective URL parameters.
The S3 API ListBuckets and ListObjects endpoints now allow per_page to be 0.
The S3 API CopyObject source parameter now requires a leading slash.
The S3 API CopyObject operation now returns a NoSuchBucket error when copying to a non-existent bucket instead of an internal error.
Enforce the requirement for auto in SigV4 signing and the CreateBucket LocationConstraint parameter.
The S3 API CreateBucket operation now returns the proper location response header.
The Stream Dashboard now has an analytics panel that shows the number of minutes of both live and recorded video delivered. This view can be filtered by Creator ID, Video UID, and Country. For more in-depth analytics data, refer to the bulk analytics documentation.
Custom letterbox color configuration option for Stream Player
The Stream Player can now be configured to use a custom letterbox color, displayed around the video (‘letterboxing’ or ‘pillarboxing’) when the video’s aspect ratio does not match the player’s aspect ratio. Refer to the documentation on configuring the Stream Player here.
Cloudflare Stream now supports the SRT live streaming protocol. SRT is a modern, actively maintained streaming video protocol that delivers lower latency, and better resilience against unpredictable network conditions. SRT supports newer video codecs and makes it easier to use accessibility features such as captions and multiple audio tracks.
When viewers manually change the resolution of video they want to receive in the Stream Player, this change now happens immediately, rather than once the existing resolution playback buffer has finished playing.
DASH and HLS manifest URLs accessible in Stream Dashboard
If you choose to use a third-party player with Cloudflare Stream, you can now easily access HLS and DASH manifest URLs from within the Stream Dashboard. For more about using Stream with third-party players, read the docs here.
When a live input is connected, the Stream Dashboard now displays technical details about the connection, which can be used to debug configuration issues.
Webhook notifications for live stream connection events
You can now configure Stream to send webhooks each time a live stream connects and disconnects. For more information, refer to the Webhooks documentation.
You can now start and stop live broadcasts without having to provide a new video UID to the Stream Player (or your own player) each time the stream starts and stops. Read the docs.
When using the automatic installation feature of the JavaScript Beacon (available only to customers proxied through Cloudflare - also known as orange-clouded customers), Subresource Integrity (SRI) is now enabled by default. SRI is a security feature that enables browsers to verify that resources they fetch are delivered without unexpected manipulation.
Once a live video has ended and been recorded, you can now give viewers the option to download an MP4 video file of the live recording. For more, read the docs here.
The Stream Player now displays preview images when viewers hover their mouse over the seek bar, making it easier to skip to a specific part of a video.
All Cloudflare Stream customers can now give viewers the option to download videos uploaded to Stream as an MP4 video file. For more, read the docs here.
You can now opt-in to the Stream Connect beta, and use Cloudflare Stream to restream live video to any platform that accepts RTMPS input, including Facebook, YouTube and Twitch.
You can now use Cloudflare Stream to restream or simulcast live video to any platform that accepts RTMPS input, including Facebook, YouTube and Twitch.
Improved client bandwidth hints for third-party video players
If you use Cloudflare Stream with a third party player, and send the clientBandwidthHint parameter in requests to fetch video manifests, Cloudflare Stream now selects the ideal resolution to provide to your client player more intelligently. This ensures your viewers receive the ideal resolution for their network connection.
Cloudflare Stream now delivers video using 3-10x less bandwidth, with no reduction in quality. This ensures faster playback for your viewers with less buffering, particularly when viewers have slower network connections.
Videos with multiple audio tracks (ex: 5.1 surround sound) are now mixed down to stereo when uploaded to Stream. The resulting video, with stereo audio, is now playable in the Stream Player.