Salesforce Big Objects Guide
Himanshu Varshney, Senior Salesforce Developer
Big Objects Best Practices
When working with Big Objects, it's important to adhere to best practices to ensure optimal performance and scalability:
Define Clear Use Cases: Identify specific use cases for Big Objects, focusing on large-scale storage needs, such as historical or audit data that doesn't require triggers, flows, or frequent updates.
Data Archiving: Use Big Objects for archiving historical data that needs to be accessible but not frequently modified.
Performance Considerations: Design queries on Big Objects to be specific and narrow to minimize performance impacts, using indexed fields whenever possible.
Data Management: Regularly review and clean up unnecessary data to maintain system performance and reduce storage costs.
Integration Strategies: When integrating external systems, consider asynchronous data processing to reduce the load on Salesforce.
Define and Deploy Custom Big Objects
To create a Custom Big Object, you define its structure through the Metadata API by specifying its fields and index. The process involves creating a metadata file that describes the Big Object, including its API name (which ends in __b), its fields, and its index (a composite of up to five fields), and deploying this file to Salesforce using tools like the Salesforce CLI or Workbench.
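For illustration, here is a minimal definition file in the Metadata API format, saved as Customer_Interaction__b.object. The object and field names are hypothetical; note that the index is part of the definition and can't be changed once the object is deployed:

<?xml version="1.0" encoding="UTF-8"?>
<CustomObject xmlns="http://soap.sforce.com/2006/04/metadata">
    <deploymentStatus>Deployed</deploymentStatus>
    <fields>
        <fullName>Account__c</fullName>
        <label>Account</label>
        <length>18</length>
        <required>true</required>
        <type>Text</type>
    </fields>
    <fields>
        <fullName>Interaction_Date__c</fullName>
        <label>Interaction Date</label>
        <required>true</required>
        <type>DateTime</type>
    </fields>
    <indexes>
        <fullName>CustomerInteractionIndex</fullName>
        <label>Customer Interaction Index</label>
        <fields>
            <name>Account__c</name>
            <sortDirection>DESC</sortDirection>
        </fields>
        <fields>
            <name>Interaction_Date__c</name>
            <sortDirection>DESC</sortDirection>
        </fields>
    </indexes>
    <label>Customer Interaction</label>
    <pluralLabel>Customer Interactions</pluralLabel>
</CustomObject>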
Deploying and Retrieving Metadata with the Zip File
To deploy or retrieve Big Object metadata, use the Metadata API's deploy and retrieve operations with a ZIP file. The ZIP pairs a package.xml manifest with the object definitions, including field and index configurations, and can be pushed or pulled with the Salesforce CLI or Workbench.
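A package.xml manifest for such a deployment might look like the following. Big Objects travel under the CustomObject metadata type, and the API version shown is just an example:

<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>Customer_Interaction__b</members>
        <name>CustomObject</name>
    </types>
    <version>59.0</version>
</Package>

Zip the manifest together with the object file and deploy it, for example with the legacy CLI command sfdx force:mdapi:deploy -f bigobject.zip -w 10, or through Workbench's Migration menu.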
Populate a Custom Big Object
Populating a Custom Big Object can be done through various methods:
Batch Apex: Suitable for large-scale data migrations or integrations.
External Data Integration Tools: Tools like Salesforce Data Loader or external ETL tools can be used for data import.
APIs: Salesforce APIs (e.g., Bulk API) can insert large volumes of data efficiently.
Populate a Custom Big Object with Apex
You can use Apex to insert data into a Big Object programmatically, which is particularly useful for complex data processing or when the data originates within Salesforce. Because Big Objects bypass the standard DML pipeline, writes use the Database.insertImmediate() method rather than the insert statement, and the code must still be bulkified to handle large data volumes.
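A minimal sketch, again assuming the hypothetical Customer_Interaction__b object from earlier. Records that share the same index values overwrite one another rather than creating duplicates:

// Archive recent Contact activity into the (hypothetical) Customer_Interaction__b Big Object
List<Customer_Interaction__b> interactions = new List<Customer_Interaction__b>();
for (Contact c : [SELECT Id, AccountId FROM Contact WHERE AccountId != null LIMIT 200]) {
    Customer_Interaction__b row = new Customer_Interaction__b();
    row.Account__c = c.AccountId;           // first index field
    row.Interaction_Date__c = System.now(); // second index field
    interactions.add(row);
}
// insertImmediate executes right away, outside the transaction, and can't be rolled back
List<Database.SaveResult> results = Database.insertImmediate(interactions);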
Delete Data in a Custom Big Object
Deleting data from Big Objects is less straightforward than with standard or custom objects. You must use the Database.deleteImmediate() method provided by Salesforce, which removes specific records identified by their index field values. This operation must be managed carefully, because it executes immediately and can affect system performance.
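One common pattern, sketched here with the hypothetical object and an illustrative Id value, is to query the rows to remove by their index fields and then pass them to Database.deleteImmediate():

// Remove archived rows for one account (hypothetical object and filter value)
List<Customer_Interaction__b> stale = [
    SELECT Account__c, Interaction_Date__c
    FROM Customer_Interaction__b
    WHERE Account__c = '001xx000003DGb2AAG'
];
if (!stale.isEmpty()) {
    // Like insertImmediate, this runs right away and is not transactional
    Database.deleteImmediate(stale);
}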
Big Objects Queueable Example
Queueable Apex can be used to perform asynchronous operations on Big Objects, such as inserting or processing large volumes of data, keeping the resource consumption out of the synchronous transaction. The class below fills in the skeleton using the hypothetical Customer_Interaction__b object from earlier:
public class BigObjectQueueable implements Queueable {
    private List<Customer_Interaction__b> records;
    public BigObjectQueueable(List<Customer_Interaction__b> records) {
        this.records = records;
    }
    public void execute(QueueableContext context) {
        // Write asynchronously so the caller's transaction isn't held up
        Database.insertImmediate(records);
    }
}
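You might enqueue the job from a trigger handler or service class, passing in the records to archive:

System.enqueueJob(new BigObjectQueueable(interactions));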
Big Object Query Examples
Querying Big Objects uses SOQL, but with notable restrictions compared to standard objects: the WHERE clause can reference only the fields in the object's index, and operators such as LIKE, !=, and NOT IN aren't available.
SELECT Field1__c, Field2__c FROM CustomBigObject__b WHERE IndexedField__c = 'Value'
View Big Object Data in Reports and Dashboards
Currently, Big Object data can't be accessed directly in standard Salesforce reports and dashboards. To visualize this data, you typically need custom solutions, such as Lightning components backed by Apex (sketched below) or external reporting tools that reach Big Object data through the APIs.
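As one possible approach, a small Apex controller can expose Big Object rows to a Lightning component. Everything here (class, object, and field names) is illustrative:

public with sharing class InteractionHistoryController {
    // Fetch archived interactions for an account; the filter uses the first index field
    @AuraEnabled(cacheable=true)
    public static List<Customer_Interaction__b> getInteractions(Id accountId) {
        return [
            SELECT Account__c, Interaction_Date__c
            FROM Customer_Interaction__b
            WHERE Account__c = :accountId
            LIMIT 200
        ];
    }
}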
SOQL with Big Objects
SOQL queries on Big Objects have specific considerations:
Indexed Fields: The WHERE clause can filter only on the index fields, must include them in the order they're defined starting from the first, and can apply a range operator only to the last field in the filter (see the example query after this list).
Limitations: Clauses and operators such as ORDER BY, GROUP BY, LIKE, !=, and NOT IN aren't supported; results are returned in the order defined by the index.
Aggregate Queries: Aggregate functions aren't available in synchronous SOQL against Big Objects; they were previously offered through Async SOQL, which Salesforce has since retired.
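For example, with the two-field index sketched earlier (Account__c first, then Interaction_Date__c), a valid query filters the index fields in order and uses a range operator only on the last one (values are illustrative):

SELECT Account__c, Interaction_Date__c
FROM Customer_Interaction__b
WHERE Account__c = '001xx000003DGb2AAG'
AND Interaction_Date__c > 2023-01-01T00:00:00Z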
By understanding and utilizing these aspects of Big Objects, organizations can effectively manage large volumes of data within Salesforce, ensuring data accessibility without compromising system performance.