Mongoose, a popular ODM (Object Data Modeling) library for MongoDB in Node.js, provides powerful querying capabilities for working with complex document structures. In this article, we'll explore some advanced querying features in Mongoose, including $elemMatch, aggregation pipelines, and bulkWrite operations. These techniques are essential for building efficient and scalable applications.
1. $elemMatch – Filtering Arrays of Subdocuments
Use Case
When documents contain arrays of objects (like orders), you might want to query based on specific key-value pairs within that array.
Example Schema
const CustomerSchema = new mongoose.Schema({
  name: String,
  is_active: Boolean,
  orders: [
    {
      product: String,
      quantity: Number,
      status: String,
      city: String, // shipping city, used by the aggregation example below
    },
  ],
});
Query Example
const pendingOrders = await Customer.find({
  orders: {
    $elemMatch: {
      product: "Laptop",
      status: "pending",
    },
  },
});
Explanation
- $elemMatch ensures that at least one element in the orders array satisfies both conditions: product = 'Laptop' and status = 'pending'.
- Without $elemMatch, MongoDB might incorrectly match documents where one order has product: 'Laptop' and another has status: 'pending', as shown in the sketch below.
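For contrast, here is the naive dot-notation query that the warning above refers to. It is a minimal sketch against the same Customer model; each condition can be satisfied by a different element of the orders array, so it may return customers who have no pending Laptop order at all.

// Naive query without $elemMatch: the two conditions below may be met by
// two DIFFERENT orders, e.g. a delivered Laptop plus some other pending item.
const looseMatch = await Customer.find({
  "orders.product": "Laptop",
  "orders.status": "pending",
});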
2. Aggregation Pipeline – Extracting Unique Values
Use Case
Extract all unique cities where customers have placed shipped orders.
Aggregation Query
const customerCitiesCursor = Customer.aggregate([
  { $match: { is_active: true } },
  { $unwind: "$orders" },
  { $match: { "orders.status": { $regex: /^shipped$/i } } },
  { $group: { _id: null, cities: { $addToSet: "$orders.city" } } },
]).cursor();
Step-by-Step Explanation
- $match: Filters documents to include only active customers.
- $unwind: Deconstructs the orders array so that each element becomes its own document.
- $match: Filters orders whose status is 'shipped' (case-insensitive).
- $group: Collects all matching cities into a single array, using $addToSet to avoid duplicates.
- .cursor(): Returns a cursor for memory-efficient streaming, which is useful when dealing with large datasets (see the consumption sketch below).
Result
A single result document whose cities array holds every distinct city with a shipped order.
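To actually read those cities, the cursor has to be drained. Here is a minimal sketch, assuming a recent Mongoose version where aggregation cursors are async iterable; cursor.eachAsync() would work as well. Because $group collapses everything into one document, at most one doc is yielded.

// Drain the aggregation cursor defined above.
for await (const doc of customerCitiesCursor) {
  console.log(doc.cities); // e.g. ["Berlin", "Hamburg"] -- sample values only
}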
3. bulkWrite – Batch Write Operations
Use Case
Perform multiple write operations (insert, update, delete) in a single database call to improve performance.
Sample Usage
const bulkOps = [
  {
    updateOne: {
      filter: { name: "John Doe" },
      update: { $set: { is_active: false } },
    },
  },
  {
    deleteOne: {
      filter: { name: "Jane Smith" },
    },
  },
  {
    insertOne: {
      document: { name: "New Customer", is_active: true, orders: [] },
    },
  },
];

const result = await Customer.bulkWrite(bulkOps);
Explanation
- updateOne: Updates a single document matching the filter.
- deleteOne: Deletes one document matching the filter.
- insertOne: Adds a new document.
Result Object
The result from bulkWrite contains counters and ids such as:
- insertedCount, matchedCount, modifiedCount, deletedCount, upsertedCount
- upsertedIds (the _ids of any upserted documents, keyed by operation index)
Older MongoDB drivers expose the same information as nInserted, nMatched, nModified, nRemoved, and nUpserted.
This allows you to monitor the impact of your batch operations.
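For example, after running the batch above you could log the counters to verify what happened; the property names below are the ones exposed by recent drivers, so treat this as a sketch rather than a version-exact reference.

// Inspect the bulkWrite result from the sample above.
console.log(`matched:  ${result.matchedCount}`);
console.log(`modified: ${result.modifiedCount}`);
console.log(`deleted:  ${result.deletedCount}`);
console.log(`inserted: ${result.insertedCount}`);
console.log(result.upsertedIds); // operation index -> upserted _id

By default the operations run in order and stop at the first error; passing { ordered: false } as a second argument to bulkWrite lets the remaining operations proceed independently.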
Summary
Advanced Mongoose queries like $elemMatch, aggregation pipelines, and bulkWrite enable you to:
- Perform precise filtering within arrays
- Extract and manipulate data efficiently
- Optimize performance with batched write operations
These techniques are vital when working with complex MongoDB data structures in production-grade Node.js applications.
Bonus Tip
If you regularly use these patterns, consider wrapping them in reusable repository or service classes for cleaner and more maintainable code — especially in a NestJS architecture.
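To illustrate that tip, here is a minimal, framework-agnostic sketch. The CustomerRepository class and its method names are hypothetical, not part of Mongoose or NestJS; in a NestJS project the class would typically be a TypeScript provider decorated with @Injectable().

// Hypothetical repository wrapping the queries from this article.
class CustomerRepository {
  constructor(customerModel) {
    this.Customer = customerModel; // the Mongoose model defined above
  }

  // $elemMatch example: customers with a pending order for the given product.
  findWithPendingOrder(product) {
    return this.Customer.find({
      orders: { $elemMatch: { product, status: "pending" } },
    });
  }

  // Aggregation example: cursor over distinct cities with shipped orders.
  streamShippedOrderCities() {
    return this.Customer.aggregate([
      { $match: { is_active: true } },
      { $unwind: "$orders" },
      { $match: { "orders.status": { $regex: /^shipped$/i } } },
      { $group: { _id: null, cities: { $addToSet: "$orders.city" } } },
    ]).cursor();
  }

  // bulkWrite example: apply a batch of mixed write operations.
  applyBatch(bulkOps) {
    return this.Customer.bulkWrite(bulkOps);
  }
}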