8.0 Migration Guide

In this document, we will see the major steps to migrate from 7.x to 8.0.

Please check the changelog for a detailed list of fixes, changes, and improvements introduced in 8.0.

Dependencies upgrade

Dependencies have been upgraded to major versions. Check the changelog for a detailed list.

Gradle has also been upgraded to a newer version. Upgrade the Gradle Wrapper to benefit from new features and improvements: ./gradlew wrapper --gradle-version=8.14.3 && ./gradlew wrapper.

Check Gradle migration to update your builds: Upgrading your build from Gradle 8.x to the latest

Java 11 to Java 21

Java 21 (LTS) is now the minimum version required to build and run applications.

Install Java 21 and then increase the Java version number in the build.gradle file:

build.gradle
allprojects {
  java {
    toolchain {
      languageVersion = JavaLanguageVersion.of(21)
    }
  }
}

Also, make sure to use JDK 21 in the IDE as well as in your terminal (if needed).

See the JDK 21 migration guide and JDK 21 release notes for details.

Java EE to Jakarta EE

With version 8.0, we are migrating to Jakarta EE 10+, which involves a significant namespace change from javax.* to jakarta.* and an upgrade of all the dependencies to be compliant with the new Jakarta EE namespace. This change affects the entire application and requires careful attention during the migration process.

To migrate your application, you first need to change references in class imports, configuration files, etc. For an automated process, one option is to use the Eclipse Transformer. The most laborious process will then be to update your application to work with the new versions of the dependencies.
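
For example, Java imports referring to the javax namespace must be switched to their jakarta equivalents. A few common packages are shown below; the actual list depends on the dependencies your application uses:

// ❌ Old Java EE imports
import javax.inject.Inject;
import javax.persistence.Entity;
import javax.servlet.http.HttpServletRequest;
import javax.ws.rs.GET;

// ✅ New Jakarta EE imports
import jakarta.inject.Inject;
import jakarta.persistence.Entity;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.ws.rs.GET;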

See the Jakarta EE Platform Specification for details.

The following links provide more background on the Jakarta EE transition: Jakarta EE blog post, Oracle blog post.

Hibernate 5.6 to Hibernate 6.6

Hibernate 6.6 is compliant with Jakarta Persistence 3.1 and is a major part of the Jakarta EE migration. This will require the most careful attention during the migration process of your application. Here are some of the major changes:

Schema Changes

Column Types (on PostgreSQL)

| Column Type   | Hibernate 5.x  | Hibernate 6.6  | Notes                                      |
|---------------|----------------|----------------|--------------------------------------------|
| binary        | oid            | bytea          | @Lob not used anymore                      |
| decimal       | numeric(19, 2) | numeric(38, 2) | Default precision has changed              |
| datetime      | timestamp      | timestamp(6)   | Explicit default precision (no difference) |
| datetime (tz) | timestamp      | timestamptz(6) | Timezone and offset storage                |
| time          | time           | time(6)        | Explicit default precision (no difference) |

Query Changes

Special properties on plural attributes have been replaced by function syntax

Before
SELECT p FROM Person p WHERE p.addresses.size > 2
After
SELECT p FROM Person p WHERE size(p.addresses) > 2

DISTINCT is now always passed through to the SQL query; duplicate parent entities resulting from JOIN FETCH are filtered out automatically, so DISTINCT is no longer needed for that purpose

Before
SELECT DISTINCT p FROM Person p JOIN FETCH p.addresses
After
SELECT p FROM Person p JOIN FETCH p.addresses

Comparing an entity directly to a literal is no longer allowed

Before
SELECT e from MyEntity e WHERE e = 123
After
SELECT e from MyEntity e WHERE e.id = 123

The FROM token is disallowed in UPDATE statements

Before
UPDATE FROM MyEntity e SET e.attr = null
After
UPDATE MyEntity e SET e.attr = null

NULL comparisons using = and <>/!= have been removed

Before
SELECT e from MyEntity e WHERE e.attr = NULL
After
SELECT e from MyEntity e WHERE e.attr IS NULL

Native query ordinal parameter binding is 1-based instead of 0-based

Before
var query = s.createQuery("select p from Parent p where id in ?0", Parent.class);
query.setParameter(0, Arrays.asList(0, 1, 2, 3));
After
var query = s.createQuery("select p from Parent p where id in ?1", Parent.class);
query.setParameter(1, Arrays.asList(0, 1, 2, 3));

Query streams need to be explicitly closed

Before
Stream<MyEntity> stream = query.stream();
// Use stream...
// Stream automatically closed
After
try (Stream<MyEntity> stream = query.stream()) {
    // Use stream...
}
// Stream automatically closed after try block

Stricter type checking for literals in field comparisons

Before
-- Literal type could be coerced for the comparison
SELECT e FROM MyEntity e WHERE e.id = '123'
After
-- Use the correct type for the literal
SELECT e FROM MyEntity e WHERE e.id = 123

-- Or use a parameter
SELECT e FROM MyEntity e WHERE e.id = :entityId

Stricter parameter type binding

A consequence of strict type binding is that the query parser will not assign different parameter types to the same named parameter. For example, using a single named parameter both in an IS NULL check and in an IN clause causes a type mismatch: the parameter type falls back to the column type, so binding a collection to it fails:

Before
SELECT e FROM MyEntity e WHERE :names IS NULL OR e.name IN :names
var names = List.of("a", "b");
// IllegalArgumentException since Hibernate 6
// Trying to coerce names to String instead of Collection<String>
query.setParameter("names", ObjectUtils.isEmpty(names) ? null : names);
After
SELECT e FROM MyEntity e WHERE :isNamesEmpty = TRUE OR e.name IN :names
var names = List.of("a", "b");
// Use separate parameters
query.setParameter("isNamesEmpty", ObjectUtils.isEmpty(names));
query.setParameter("names", names);

Other Notable Changes

Hibernate 6 supports automatic coercion of single-value parameters

// `credit` is a decimal field.
var qlString = "SELECT self FROM Contact self WHERE self.credit = :credit";
var credit = "2.5";
var query = JPA.em().createQuery(qlString, Contact.class);
// Hibernate 5 throws IllegalArgumentException.
// Hibernate 6 can coerce single value.
query.setParameter("credit", credit);
// Hibernate 6 will return results.
var result = query.getResultList();

Hibernate 6 changes behavior for multi-value parameter coercion

// `credit` is a decimal field.
var qlString = "SELECT self FROM Contact self WHERE self.credit IN :credits";
var credits = new ArrayList<String>();
credits.add(null);
credits.add("");
credits.add("2.5");
var query = JPA.em().createQuery(qlString, Contact.class);
// Hibernate 5 throws IllegalArgumentException.
// Hibernate 6 cannot coerce multi value and does not throw IllegalArgumentException.
query.setParameter("credits", credits);
// Hibernate 6 throws NumberFormatException.
var result = query.getResultList();

Hibernate 6 changes behavior when handling null values in collections for cached queries

var qlString = "SELECT self FROM Contact self WHERE self.id IN :ids";
var ids = new ArrayList<Long>();
ids.add(null);
ids.add(1L);
ids.add(2L);
var query = JPA.em().createQuery(qlString, Contact.class);
query.setHint(AvailableHints.HINT_CACHEABLE, true);
query.setParameter("ids", ids);
// Hibernate 5 doesn't fail because of null in collection.
// Hibernate 6 throws AssertionError because of null in collection when caching is enabled.
var result = query.getResultList();

Hibernate Event Listeners

As part of the upgrade to Hibernate 6.6, we have migrated audit support from Hibernate Interceptor to Hibernate Event Listeners. This change addresses issues with accessing the current transaction’s session, which was problematic in scenarios where the session was created outside of the dependency injection context. Using Event Listeners allows access to the current session from the event source, ensuring more reliable operation across different contexts.

Additionally, a new feature has been introduced that allows developers to register their own custom Hibernate event listeners. If you were using a custom Hibernate interceptor, we encourage you to migrate to event listeners as well, in order to address these issues. For details, refer to Hibernate Event Listeners Documentation.
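
As an illustration, a listener implementing one of Hibernate's standard event listener interfaces might look like the minimal sketch below (PreInsertEventListener is part of Hibernate's org.hibernate.event.spi package; how the listener is registered with the platform is described in the Hibernate Event Listeners Documentation):

import org.hibernate.event.spi.PreInsertEvent;
import org.hibernate.event.spi.PreInsertEventListener;

public class MyAuditListener implements PreInsertEventListener {

  @Override
  public boolean onPreInsert(PreInsertEvent event) {
    // Inspect or adjust the entity state before it is inserted.
    Object entity = event.getEntity();
    // ... custom audit logic ...
    return false; // returning false means the insert is not vetoed
  }
}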

Guice 5.1 to Guice 7.0

Guice 7.0 supports the Jakarta EE namespace and is part of the Jakarta EE migration. Compared to previous versions, it has completely dropped support for the javax.* namespace.

See the Guice 7.0.0 release notes for details.

RESTEasy 4.7 to RESTEasy 6.2

RESTEasy 6.2 is compliant with Jakarta RESTful Web Services 3.1 and is part of the Jakarta EE migration.

See the RESTEasy 6.2 user guide for details.

Tomcat 9 to Tomcat 10.1

Apache Tomcat 10.1 is compliant with Jakarta Servlet 6.0 and is part of the Jakarta EE migration. Apache Tomcat version 9 is no longer supported.

Shiro 1.13 to Shiro 2.0

Password Hashing Changes

As part of the upgrade to Apache Shiro 2, we have transitioned from the SHA-512 hashing algorithm to the new default, Argon2id. Argon2id is a state-of-the-art password hashing algorithm that offers enhanced protection against modern attack vectors.

Argon2id hashing will be used for new users and for existing users when they change their password. Users with SHA-512 hashes will continue to be able to log in. However, to ensure all user passwords are secured with Argon2id, you may want to enforce a password change for users with legacy hashes:

UPDATE auth_user SET force_password_change = TRUE WHERE password LIKE '$shiro1$%';

This will prompt affected users to change their password upon their next login. Argon2id hashing will automatically be applied to their new password.

Session Management Changes

We have switched from servlet-container sessions to Shiro native sessions. This change enables the use of Redis/Valkey server as a session store and simplifies the overall architecture by leveraging Shiro’s SessionDAO.

Key changes to be aware of:

  • Migration from HttpSession to Shiro’s native org.apache.shiro.session.Session: if you are using HttpServletRequest.getSession(), you need to update your code to use SecurityUtils.getSubject().getSession() instead (see the example after this list).

  • By default, the session manager now uses an in-memory Caffeine cache. This means that sessions are not persisted between application restarts.

  • HttpSessionListener is no longer used. Instead, you can access active sessions via AuthSessionService.getActiveSessions() which uses the SessionDAO.
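
For example, code that stored attributes on the servlet session can be migrated along these lines (a minimal sketch using Shiro's standard SecurityUtils API):

import org.apache.shiro.SecurityUtils;
import org.apache.shiro.session.Session;

// ❌ Old code using the servlet-container session
HttpSession httpSession = request.getSession();
httpSession.setAttribute("theme", "dark");
Object theme = httpSession.getAttribute("theme");

// ✅ New code using Shiro's native session
Session session = SecurityUtils.getSubject().getSession();
session.setAttribute("theme", "dark");
Object theme = session.getAttribute("theme");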

For more details about Shiro’s session management, see the Shiro Session Management documentation.

API key authentication

API key authentication allows clients to authenticate API requests without maintaining a session. This is particularly useful for server-to-server communication and automated scripts. See usage here.

Run the following SQL script to create the table:

create table auth_user_token (
    id           bigint not null primary key,
    archived     boolean,
    version      integer,
    created_on   timestamp(6),
    updated_on   timestamp(6),
    expires_at   timestamp(6) not null,
    last_used_at timestamp(6),
    name         varchar(255) not null,
    token_digest varchar(255) not null,
    token_key    varchar(255) not null constraint uk_2yewhucjnwii7ljwbnt3bj3ll unique,
    created_by   bigint constraint fk_o7822fne5ugastp3rtdqom43v references auth_user,
    updated_by   bigint constraint fk_3yyw8xkkmkygcjayajg21jfvr references auth_user,
    owner        bigint       not null constraint fk_3nsst639s8kn304497trbed5q references auth_user
);

create index auth_user_token_owner_idx on auth_user_token (owner);

To make API key authentication available to users, permissions must be set on the com.axelor.auth.db.UserToken object with the domain self.owner = ? and the domain parameter __user__, granting at least create, read, write, and remove access.

Multi-factor authentication

A new com.axelor.auth.db.MFA entity is used to store multi-factor authentication configuration for a user.

Run the following SQL script to create the table:

CREATE TABLE auth_mfa (
    id bigint NOT NULL,
    archived bool NULL,
    "version" integer NULL,
    created_on timestamp(6) NULL,
    updated_on timestamp(6) NULL,
    default_method integer NULL,
    email varchar(255) NULL,
    email_code varchar(255) NULL,
    email_code_expires_at timestamp(6) NULL,
    enabled bool NULL,
    is_email_validated bool NULL,
    is_totp_validated bool NULL,
    recovery_codes text NULL,
    totp_secret varchar(255) NULL,
    created_by bigint NULL,
    updated_by bigint NULL,
    "owner" bigint NOT NULL,
    CONSTRAINT auth_mfa_pkey PRIMARY KEY (id),
    CONSTRAINT uk_qlaks9iymof66mwqotodtpyg2 UNIQUE (owner),
    CONSTRAINT fk_2yt0vnr9h8h8sxg1co544m64q FOREIGN KEY (created_by) REFERENCES auth_user(id),
    CONSTRAINT fk_o5nfu3rveqcse0hmkd54r5m4p FOREIGN KEY (updated_by) REFERENCES auth_user(id),
    CONSTRAINT fk_qlaks9iymof66mwqotodtpyg2 FOREIGN KEY ("owner") REFERENCES auth_user(id)
);
CREATE INDEX auth_mfa_owner_idx ON auth_mfa USING btree (owner);

To make multi-factor authentication available to users, permissions must be set on the com.axelor.auth.db.MFA object with the domain self.owner = ? and the domain parameter __user__, granting at least create, read, and write access.

S3-Compatible Object Storage

We now support an S3-compatible object storage service for storing uploaded files.

The default implementation uses disk storage based on the existing data.upload.dir property. Object storage can be activated by configuring the data.object-storage.* properties.

Make sure you use the com.axelor.meta.MetaFiles service and the new com.axelor.file.store.FileStoreFactory instead of assuming disk storage and working directly with the file system.

Example:

// ❌ Old code directly working with the file system.

String filePath = metaFile.getFilePath();
Path inputPath = MetaFiles.getPath(filePath);

if (Files.exists(inputPath)) {
  try (InputStream inputStream = Files.newInputStream(inputPath)) {
    // ...
  }
}
// ✅ New code using `com.axelor.file.store.FileStoreFactory`.

Store store = FileStoreFactory.getStore();

// File path can be either on file system or in object storage.
String filePath = metaFile.getFilePath();

// Use store method to check if the file exists.
if (store.hasFile(filePath)) {
  // Use store method to get the file stream.
  try (InputStream inputStream = store.getStream(filePath)) {
    // ...
  }
}

As explained, switching the storage type in the configuration does not automatically make existing code compatible with object storage. The application should not access files directly using file system APIs. Instead, always use the provided storage APIs (MetaFiles/FileStoreFactory) to manage files. These APIs ensure compatibility with both file system and object storage backends. If you are using BIRT reports (or any other report templates) that were originally designed for the local file system, you may need to adapt them to use the storage abstraction layer instead of relying on local file paths. More details here.

Temporary file management was moved from com.axelor.meta.MetaFiles to com.axelor.file.temp.TempFiles and will use the new data.upload.temp-dir property:

// Was: Path tempFile = MetaFiles.createTempFile(prefix, suffix);
Path tempFile = TempFiles.createTempFile(prefix, suffix);

// Was: Path tempDir = Files.createTempDirectory(prefix);
Path tempDir = TempFiles.createTempDir(prefix);

For detailed information on configuring and using file storage, refer to File Storage Documentation.

data.export.dir removed

Another consequence of supporting multiple storage providers is the removal of the export directory setting data.export.dir. The related ActionExport#getExportPath has also been removed.

If you used the data export directory, you need to migrate your code to create temporary files or directories instead, then download or attach the files somewhere for users to access.

In the case of ActionResponse.setExportFile, it is no longer necessary to specify a file path relative to the export directory. Now, the specified file (passed either as a String, a Path, or an InputStream) is copied to a dedicated temporary file for the pending export.

// ❌ Old code using `data.export.dir`
String exportPath = AppSettings.get().getPath(AvailableAppSettings.DATA_EXPORT_DIR, DEFAULT_EXPORT_DIR);
Path file = Path.of(exportPath, name);
// Write to file
// (...)
// Set export file to be downloaded
response.setExportFile(name); // file name must be relative to export path
// ✅ New code using a file, either temporary or not
Path path = TempFiles.createTempFile();
// Write to file
// (...)
// Set export file to be downloaded
response.setExportFile(path, name); // path is copied to a dedicated temporary file, and optional name is used as download file name
// ✅ New code using an input stream
try (InputStream inputStream = /* any input stream source */) {
  // Set export file to be downloaded
  response.setExportFile(inputStream, name); // stream is read into a temporary file, and name is used as download file name
}

The removal of data.export.dir also affects action-export and i18n exports.

action-export file storage

Exported files generated by action-export actions are now created as temporary files, instead of being saved to the data.export.dir directory.

It used to be possible to disable downloading and have exported files accessible only in the data.export.dir directory. This is no longer the case: export files must be either downloaded or attached to the current record.

Because of the temporary file approach and the removal of data.export.dir, the output and download attributes of the action-export action have been removed:

<!-- ❌ Attributes 'output' and 'download' are not valid anymore -->
<action-export name="export.sale.order" output="${name}/${date}${time}" download="true">
  <export name="${name}.xml" template="data-export/export-sale-order.st" />
</action-export>
<!-- ✅ Export file will be directly downloaded by default -->
<action-export name="export.sale.order">
  <export name="${name}.xml" template="data-export/export-sale-order.st" />
</action-export>

You can choose to attach the export file to the current record using the new attachment attribute.
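
For instance, a sketch assuming the attachment attribute takes a boolean value (check the action-export documentation for the exact usage):

<!-- ✅ Attach the export file to the current record instead of downloading it -->
<action-export name="export.sale.order" attachment="true">
  <export name="${name}.xml" template="data-export/export-sale-order.st" />
</action-export>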

Refer to action-export documentation for details.

i18n exports

i18n exports are now downloaded as a zip archive, instead of being created in the data.export.dir directory.

New context propagation system

The new context propagation system is used to propagate context across threads and task submissions. It is an all-in-one system combining the TenantAware and AuditableRunner behaviors.

The main issue is propagating to threads and tasks all the information coming from HTTP requests (only available in servlet environments): the current user, tenant, locale, language, and base URL. Some of these can be determined from application properties, while others depend on the request. The system is now able to propagate all current context information to threads and tasks. It also allows defining them manually (in case of scheduling/batching).

ContextAwareRunnable or ContextAwareCallable can be used to propagate context information. They should be used instead of the deprecated TenantAware and AuditableRunner.

Migrate from TenantAware:

// ❌ Old code
final ExecutorService executor = Executors.newFixedThreadPool(numWorkers);
executor.submit(new TenantAware(() -> {
  // work with database
})
.tenantId("some-tenant"));

// ✅ New code
final ExecutorService executor = Executors.newFixedThreadPool(numWorkers);
executor.submit(ContextAware.of().withTenantId("some-tenant").build(() -> {
  // work with database
}));

Migrate from AuditableRunner:

// ❌ Old code
final AuditableRunner runner = Beans.get(AuditableRunner.class);
final Callable<Boolean> job = () -> {
  // process
};
runner.run(job);

// ✅ New code
final Callable<Boolean> job = () -> {
  // process
};
ContextAware.of().withTransaction(false).withUser(AuthUtils.getUser("admin")).build(job).call();

The AuditableRunner first tries to set the current authenticated user as the audit user, but if none is found it falls back to the admin user. It also doesn't run the task inside a new transaction. With the new implementation, the admin user is no longer set as the audit user by default; in a non-servlet environment, you may have to provide the user you want the process to run as. Also, a new transaction is opened by default; this can be disabled with .withTransaction(false).

See more details here.

Scripting Policy

A scripting policy has been introduced to control which Java classes are accessible from scripts (Groovy, Expression Language, JavaScript). By default, access to most application classes is now restricted.

If your application uses scripts that call custom services or other classes, you will need to explicitly allow them. You can do this in two ways:

  • Add the @com.axelor.script.ScriptAllowed annotation to your service interfaces or classes (see the sketch after this list).

  • Implement the com.axelor.script.ScriptPolicyConfigurator interface to programmatically define allowed/denied packages and classes.
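
For the annotation approach, a minimal sketch might look like the following (SaleOrderService and SaleOrder are placeholder names, reused from the migration examples below):

import com.axelor.script.ScriptAllowed;

// Annotated types become accessible from Groovy/EL/JavaScript scripts.
@ScriptAllowed
public interface SaleOrderService {

  void validate(SaleOrder order);
}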

Additionally, a script execution timeout has been introduced to prevent infinite loops, which defaults to 5 minutes. This can be configured globally or per script execution.

The doInJPA helper has been removed, as it allowed unrestricted database access. To get a bean instance with a policy check, the __bean__(Class<T>) helper has been introduced to replace unrestricted com.axelor.inject.Beans.get(Class<T>) usage.

Some migration examples:

// ❌ Old code
doInJPA({ em -> em.find(Contact, id) }) // unrestricted database access
com.axelor.inject.Beans.get(SaleOrderService).validate(order) // unrestricted instantiation of all services
com.axelor.app.AppSettings.get().get('application.mode') != 'prod' // unrestricted access to all app settings

// ✅ New code
__repo__(Contact).find(id) // repositories are allowed by default
__bean__(SaleOrderService).validate(order) // checks scripting policy
__bean__(com.axelor.app.script.ScriptAppSettings).getApplicationMode() != 'prod' // you need to write your own script-allowed app settings helper

Note that the scripting policy also applies to the Groovy template engine, which now uses the same class scanner and compiler configuration as Groovy scripts. A side effect is that unqualified entity class names are resolved by the JPA class scanner and can no longer be overridden from the template context:

// ❌ Old code putting "Invoice" in Groovy template context
var templates = new GroovyTemplates();
templates.fromText("${Invoice.printingSettings?.addressPositionSelect}").make(Map.of("Invoice", invoice))).render()

// ✅ New code putting "invoice" (lowercase) in Groovy template context
// "Invoice" would resolve as the entity class, i.e. `com.axelor.apps.account.db.Invoice`
var templates = new GroovyTemplates();
templates.fromText("${invoice.printingSettings?.addressPositionSelect}").make(Map.of("invoice", invoice))).render()

For a detailed explanation of the new policy, default rules, and configuration options, please refer to the Scripting Policy Documentation.

BIRT 4.4.2 to BIRT 4.21.0

The BIRT reporting engine 4.21.0 includes numerous improvements and changes. As a result, many of your existing reports will likely render differently, or may even break, and will need to be fixed manually.

IPDFRenderOption.PDF_HYPHENATION has been renamed to IPDFRenderOption.PDF_WORDBREAK and is now enabled by default.
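
If you set this option explicitly when rendering PDF reports, only the constant name needs to change (a sketch assuming rendering is configured through BIRT's PDFRenderOption):

import org.eclipse.birt.report.engine.api.IPDFRenderOption;
import org.eclipse.birt.report.engine.api.PDFRenderOption;

PDFRenderOption options = new PDFRenderOption();
// Was: options.setOption(IPDFRenderOption.PDF_HYPHENATION, Boolean.TRUE);
options.setOption(IPDFRenderOption.PDF_WORDBREAK, Boolean.TRUE);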

BIRT has a transitive dependency on Apache POI, upgraded from 3.9 to 5.4.x, which includes breaking changes.

Some examples of Apache POI changes (non-exhaustive; see the sketch after this list):

  • Cell.CELL_TYPE_<NUMERIC|STRING|…> (int) → CellType.<NUMERIC|STRING|…> (enum)

  • cell.setCellType(Cell.CELL_TYPE_BLANK) → cell.setBlank()

  • font.setBoldweight(Font.BOLDWEIGHT_BOLD) → font.setBold(true)
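
Concretely, code using the old POI constants needs updates along these lines (a minimal sketch assuming an XSSF workbook):

import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.CellType;
import org.apache.poi.ss.usermodel.Font;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

Workbook workbook = new XSSFWorkbook();
Cell cell = workbook.createSheet().createRow(0).createCell(0);
Font font = workbook.createFont();

// Was: cell.setCellType(Cell.CELL_TYPE_BLANK);
cell.setBlank();

// Was: if (cell.getCellType() == Cell.CELL_TYPE_NUMERIC) { ... }
if (cell.getCellType() == CellType.NUMERIC) {
  // ...
}

// Was: font.setBoldweight(Font.BOLDWEIGHT_BOLD);
font.setBold(true);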

Also, the XML parser in BIRT has become stricter. Most notably, in your fontsConfig.xml, you need to omit the DOCTYPE declaration <!DOCTYPE font> to avoid validation against a non-existent DTD. Otherwise, your font configuration file will fail validation and will be ignored.

Before:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE font>
<font>
  <font-aliases>
    <mapping name="serif" font-family="DejaVu Serif" />
    <mapping name="sans-serif" font-family="DejaVu Sans" />
    <mapping name="monospace" font-family="DejaVu Sans Mono" />
  </font-aliases>
  <font-paths>
    <path path="C:/windows/fonts" />
    <path path="/usr/share/fonts/truetype" />
    <path path="/usr/share/fonts/TTF" />
  </font-paths>
</font>

After:

<?xml version="1.0" encoding="UTF-8"?>
<font>
  <font-aliases>
    <mapping name="serif" font-family="DejaVu Serif" />
    <mapping name="sans-serif" font-family="DejaVu Sans" />
    <mapping name="monospace" font-family="DejaVu Sans Mono" />
  </font-aliases>
  <font-paths>
    <path path="C:/windows/fonts" />
    <path path="/usr/share/fonts/truetype" />
    <path path="/usr/share/fonts/TTF" />
  </font-paths>
</font>

Groovy 3 to Groovy 4

Groovy 4 brings improvements in performance, Java compatibility, and language features. Beware of a few breaking changes mentioned in the Groovy 4 release notes.

Dropped name field on MetaFilter

The name field on MetaFilter has been removed, and different users can now create filters with the same title.

For the migration, you need to alter the table meta_filter with the following SQL statement:

ALTER TABLE meta_filter DROP COLUMN name;
CREATE INDEX IF NOT EXISTS meta_filter_filter_view_idx ON meta_filter(filter_view);

TagSelect widget deprecated in favor of Tags

The TagSelect widget is deprecated in favor of Tags. It has the same behavior; this is simply a renaming of the widget for readability and relevance. The old name can still be used, but we encourage adopting the new name as the old one will be removed in a future version.
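
In view definitions, this is a simple rename (a sketch assuming a many-to-many field named tags):

<!-- ❌ Deprecated widget name -->
<field name="tags" widget="TagSelect" />

<!-- ✅ New widget name -->
<field name="tags" widget="Tags" />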

Remove JUnit 4 support

JUnit 4 is no longer actively maintained; the last maintenance release was JUnit 4.13.2 in February 2021. Support for JUnit Jupiter (JUnit 5) was introduced in v6.0. It is time to drop support for JUnit 4: migrate your JUnit 4 tests to JUnit 5.
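
Typical changes involve swapping the JUnit 4 annotations and assertions for their Jupiter equivalents, for example (MyServiceTest is a placeholder):

// ❌ Old JUnit 4 test
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class MyServiceTest {

  @Before
  public void setUp() { /* ... */ }

  @Test
  public void testCompute() {
    assertEquals(42, 6 * 7);
  }
}

// ✅ New JUnit 5 (Jupiter) test
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

public class MyServiceTest {

  @BeforeEach
  void setUp() { /* ... */ }

  @Test
  void testCompute() {
    assertEquals(42, 6 * 7);
  }
}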

Remove Gradle support for database management

Gradle support for database management, ./gradlew database (init|update|…), has been removed in favor of the new CLI.

Remove license header support

As part of the AxelorPlugin, we historically provided built-in support to manage license headers. This support has been removed: licenseFormat, licenseCheck and related tasks no longer exist. If the license extension has been customized, it can be removed as it is no longer used.

The plugin this support relied on is no longer maintained. It is now the responsibility of the application or module to provide it.

There are many Gradle plugins that can do the job; a list is available on the Gradle plugin portal. The awesome Spotless formatting plugin provides support for adding license headers.
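
For example, with Spotless, a license header can be applied to Java sources with a build configuration along these lines (a sketch; adjust the plugin version and the header file location to your project):

plugins {
  id 'com.diffplug.spotless' version '6.25.0'
}

spotless {
  java {
    // Prepend the header from a file at the root of the project to every Java source file.
    licenseHeaderFile rootProject.file('src/license/header.txt')
  }
}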

Deprecate Angular support

To ease migration from previous versions, v7 was built with legacy support for Angular evaluations and templates. This compatibility layer was intended as a temporary bridge to help move from Angular to React. Since React is now fully stable and supported, we encourage you to complete the migration to React.

What this means:

  • Legacy Angular support will be removed in an upcoming major release.

  • Any remaining Angular-based templates or evaluations will stop working once this removal takes place.

  • To ensure a smooth upgrade path, please migrate your codebase to React as soon as possible.

Remove $record support for custom field expressions like showIf/hideIf

Previously, custom fields used the $record prefix to access form fields (e.g., $record.name) in showIf/hideIf expressions. Now, form fields are directly accessible without the prefix, as $record support has been removed.

Also, scoped fields are no longer directly accessible. For example, if you have a custom field named test inside attrs and want to use it in an expression for another field within the attrs scope, you need to reference it as $attrs.test.

Before:

<field name="attrs.test" showIf="$record.id && $record.name" />
<field name="attrs.testFrom" showIf="test" /> // showIf attrs.test is set

Now:

<field name="attrs.test" showIf="id && name" />
<field name="attrs.testFrom" showIf="$attrs.test" />

Key changes:

  • Removed support for the $record prefix in custom field expressions.

  • Expressions are now unified and work consistently across both form fields and custom fields.

Dropped deprecated features

Some features that were marked as deprecated in previous versions have now been dropped:

  • Help widget css support is removed; use variant instead. See the 7.3 migration guide.

  • Removed the deprecated ws/files/report/{link:.*} and ws/files/data-export/{fileName:.*} web services in favor of their equivalents using query parameters: ws/files/report?link=<link> and ws/files/data-export?fileName=<fileName>.

  • Removed the MetaPermissions#isCollectionReadable method.

  • Removed support for Font Awesome icons. Use either Material Symbols or Bootstrap Icons.

  • Removed the top attribute on menuitem. Top menu support has been removed since 7.0; to ensure compatibility, the attribute was still present in the XSD.

  • Removed record. prefix support in expressions/templates/EvalRefSelect. It was added for backward compatibility; accessing fields no longer needs the record. prefix. Update your JS expressions, templates, and EvalRefSelect x-eval-* attributes accordingly.

  • Removed the method JPA#withTransaction(Supplier) in favor of JPA#callInTransaction(Supplier), as shown below.
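
The last change is a drop-in replacement, for example (doSomething() is a placeholder for any work done inside the transaction):

// ❌ Old code
var result = JPA.withTransaction(() -> doSomething());

// ✅ New code
var result = JPA.callInTransaction(() -> doSomething());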