If you don't have a Medium subscription, don't worry: I always post free-to-read links on my LinkedIn.
Every developer knows the moment: you open the browser console or terminal, see a jumble of nameless messages, and ask yourself the same questions every time:
- Which module failed?
- Where did this log come from?
- Is this an error or just information?
At a critical moment, debugging turns into an archaeological excavation.
I created @dolgikh-maks/logger — a minimalistic utility without third-party dependencies that solves this problem elegantly and without frills.
In this article, I want to walk through real-life scenarios that demonstrate the usefulness of this tool.
Philosophy of the tool
When creating @dolgikh-maks/logger, I was guided by these principles:
- Minimalism. No configurations longer than 100 lines
- Predictability. Consistent behavior everywhere
- Performance. Zero overhead when debug is off
- Developer Experience. Easy-to-read logs = faster problem detection
- ESLint compliance. Eliminate `// eslint-disable no-console` comments from your code.
This is the result of analyzing real pain points: confusing logs in microservices, the inability to debug in production, and the lack of a unified style in team development.
Key advantages of the approach
1. Context isolation
Each module has its own scope, which eliminates confusion when components are running in parallel.
2. Visual hierarchy
Colored indicators (🔵 info, 🟢 success, 🟡 warning, 🔴 error) allow you to instantly assess the situation.
3. Flexible debugging
The setDebug() flag enables detailed logs only when needed, without cluttering production.
4. Timers out of the box
The loading() method automatically measures the execution time of asynchronous operations in the browser, which is critical for optimization.
5. Cross-platform compatibility
A unified API for browsers and Node.js. The terminal has an animated spinner for long operations.
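To make these ideas concrete, here is a toy re-implementation of the core mechanism: scoped prefixes, a debug gate, and promise timing. This is not the library's actual code, just a sketch of how such a logger can work under the hood.

```javascript
// Toy sketch of the mechanism: NOT the library's real implementation.
// Each logger carries a scope prefix, debug() is gated by a flag,
// and loading() times a promise and reports success or failure.
function createToyLogger({ scope }) {
  let debugEnabled = false;
  const prefix = `[${scope}]:`;
  return {
    setDebug(on) { debugEnabled = on; },
    info(msg) { console.log(`🔵 ${prefix} ${msg}`); },
    error(msg) { console.error(`🔴 ${prefix} ${msg}`); },
    debug(msg) {
      // Near-zero work when debug is off: the call returns immediately
      if (debugEnabled) console.log(`⚪ ${prefix} ${msg}`);
    },
    async loading(msg, promise) {
      const start = Date.now();
      try {
        const result = await promise;
        console.log(`🟢 ${prefix} ${msg} (${Date.now() - start}ms)`);
        return result;
      } catch (err) {
        console.error(`🔴 ${prefix} ${msg}`);
        throw err;
      }
    },
  };
}

// Example: two isolated scopes
const a = createToyLogger({ scope: 'User' });
const b = createToyLogger({ scope: 'Cart' });
a.info('hello');   // 🔵 [User]: hello
b.debug('hidden'); // prints nothing until b.setDebug(true)
```

The key design point is that isolation and gating live inside each instance, so parallel modules never share state.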
Problem #1: Chaos in front-end application API requests
Situation
You are developing an SPA with many asynchronous requests. Users are complaining about strange behavior, and you need to understand what is happening with the API.
Before: Random logging
Try to understand what happened here. It is impossible to trace the sequence, source, or context of the error.
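The original snippet is not shown here, but a typical "before" version is just scattered console.log calls. Below is a hypothetical reconstruction; fetchJson is a stand-in for the real API requests.

```javascript
// Hypothetical "before" code: plain console.log calls with no scope or level.
// fetchJson is a stand-in for the real API requests.
const fetchJson = async (url) => ({ url });

const fetchUser = async () => {
  const user = await fetchJson('/api/user');
  console.log('Fetched profile'); // which module? how long did it take?
  return user;
};

const fetchOrders = async () => {
  try {
    return await fetchJson('/api/orders');
  } catch (err) {
    console.log('Service unavailable'); // error or just info?
    throw err;
  }
};

const updateCart = async (items) => {
  console.log('Updating'); // updating what, exactly?
  return fetchJson('/api/cart');
};
```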
**Console output:**
Fetched profile
Service unavailable
Updating

Obviously, this console output format is extremely uninformative. Yes, you could add a call scope or a trace to every output yourself, but do you really want to do that every time?
After: A structured approach
What if you get a ready-made tool that can immediately show the context of a particular code call?
All you have to do is create an appropriate logger for each domain area and use the loading() method, which tracks your promise.
import createLogger from '@dolgikh-maks/logger';

// User session
const userLogger = createLogger({ scope: 'User' });
const fetchUser = async () => {
  const promise = fetch('/api/user');
  return userLogger.loading('Fetching profile', promise);
};

// Orders
const orderLogger = createLogger({ scope: 'Orders' });
const fetchOrders = async () => {
  const promise = fetch('/api/orders')
    .then(res => {
      if (!res.ok) throw new Error('Service unavailable');
      return res.json();
    });
  return orderLogger.loading('Loading history', promise);
};

// Shopping cart
const cartLogger = createLogger({ scope: 'Cart' });
const updateCart = async (items) => {
  const promise = fetch('/api/cart', {
    method: 'POST',
    body: JSON.stringify(items)
  });
  return cartLogger.loading('Updating', promise);
};
**Console output:**
🟢 [User]: Fetching profile (234ms)
🔴 [Orders]: Loading history
Error: Service unavailable
🟢 [Cart]: Updating (156ms)

Result: It is immediately clear that the orders service is unavailable, while the user profile and the shopping cart are working normally. The timings can be used for automatic performance tracking.
Problem #2: Debugging in production without log spam
Situation
Only critical logs are needed in production, but when a bug occurs, detailed information is required for reproduction.
Before: All or nothing (or conditionals)
- Option 1: manually comment out all debug logs and try to understand the sequence of actions locally
// console.log('Validation step 1');
// console.log('Validation step 2');
// console.log('Parsing data...');

- Option 2: Leave everything as it is and drown in logs
console.log('Validation step 1');
console.log('Validation step 2');
console.log('Parsing data...');
console.log('Checking permissions');

- Option 3: Conditional calls
if (debug) {
  console.log('Validation step 1');
}

None of these three options is optimal. Such workarounds quickly accumulate, get forgotten, and obscure why they were added in the first place.
After: Controlled debug mode
Instead of marking each location for potential debugging, use a single control point, setDebug(), for extended console output.
// At the root of the file or application
const userLogger = createLogger({ scope: 'User' });
// activating debug mode from browser
const isDebugMode = localStorage.getItem('debug') === 'true';
userLogger.setDebug(isDebugMode);
// Now these logs only appear when necessary.
userLogger.debug('Validation step 1: checking required fields');
userLogger.debug('Validation step 2: type conversion');
userLogger.debug('Parsing data structure');
userLogger.debug('Checking user permissions');
// But important messages are always visible
userLogger.error('Failed to process user data');
userLogger.warn('Using deprecated API endpoint');

We get the following output in the console.
**Production (debug=false):**
🔴 [User]: Failed to process user data
🟡 [User]: Using deprecated API endpoint
------
**Production (debug=true):**
⚪ [User]: Validation step 1: checking required fields
⚪ [User]: Validation step 2: type conversion
⚪ [User]: Parsing data structure
⚪ [User]: Checking user permissions
🔴 [User]: Failed to process user data
🟡 [User]: Using deprecated API endpoint

Result: The console stays informative while keeping a consistent labeling style for debug messages. When a bug is detected, we can enable extended mode via localStorage and see what is happening inside, without trying to reproduce the bug in a local environment.
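In Node.js, the same switch can hang off an environment variable instead of localStorage. Below is a hypothetical helper (setDebugFromEnv and the APP_DEBUG variable name are my own inventions, not part of the library); it works with any logger that exposes setDebug(boolean).

```javascript
// Hypothetical helper: derive the debug flag from an environment variable.
// Works with any logger exposing setDebug(boolean).
function setDebugFromEnv(logger, envVar = 'APP_DEBUG') {
  const value = process.env[envVar];
  const enabled = value === 'true' || value === '1';
  logger.setDebug(enabled);
  return enabled;
}

// Usage (assuming a logger created as in the examples above):
// const userLogger = createLogger({ scope: 'User' });
// setDebugFromEnv(userLogger); // run as: APP_DEBUG=true node app.js
```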
Problem #3: Node.js service with multiple modules
Situation
You are developing a backend service that includes: database, cache, external APIs, and message queues. Everything runs in parallel, and the logs get mixed up into an unreadable mess.
Before: Impossible to trace the source
You open Lens or another log-viewing tool and see this:
**Console output:**
Connecting to PostgreSQL
Database connection
Checking cache for user:42
Cache hit
Processing job #128
Request timeout
Job #128 failed: External API unavailable

What crashed — the database or the external API? What is the sequence of calls, and which message belongs to which module?
After: Each module is isolated
Following the same pattern as in the first example, we create a logger for each zone of the code and track its actions through it.
// Connecting to the database
const dbLogger = createLogger({ scope: 'Database' });
const connectDB = async () => {
  dbLogger.info('Connecting to PostgreSQL');
  const promise = pg.connect();
  await dbLogger.loading('Database connection', promise);
};

// Working with the cache
const cacheLogger = createLogger({ scope: 'Redis' });
const getCachedUser = async (id) => {
  cacheLogger.debug(`Checking cache for user:${id}`);
  const cached = await redis.get(`user:${id}`);
  if (cached) {
    cacheLogger.success('Cache hit');
    return cached;
  }
  cacheLogger.warn('Cache miss, fetching from DB');
  return null;
};

// Queue processing
const queueLogger = createLogger({ scope: 'Queue' });
const processJob = async (job) => {
  queueLogger.info(`Processing job #${job.id}`);
  try {
    const result = await job.execute();
    queueLogger.success(`Job #${job.id} completed`);
    return result;
  } catch (error) {
    queueLogger.error(`Job #${job.id} failed: ${error.message}`);
    throw error;
  }
};

// External API
const apiLogger = createLogger({ scope: 'ExternalAPI' });
const fetchExternalData = async () => {
  const promise = axios.get('https://api.example.com/data', {
    timeout: 5000
  });
  return apiLogger.loading('Fetching external data', promise);
};
**Console output:**
🔵 [Database]: Connecting to PostgreSQL
🔄 [Database] 🐳
🔄 [Database] 🐳
🟢 [Database]: Database connection
🔵 [Redis]: Checking cache for user:42
🟢 [Redis]: Cache hit
🔵 [Queue]: Processing job #128
🔄 [ExternalAPI] 🐳
🔄 [ExternalAPI] 🐳
🔄 [ExternalAPI] 🐳
🔴 [ExternalAPI]: Fetching external data
Error: Request timeout
🔴 [Queue]: Job #128 failed: External API unavailable

In reality, the spinner is rendered dynamically on a single line, not as multiple entries like in this example.
Result: It is crystal clear that the external API is not responding, which caused the queued job to fail, while the database and cache are working steadily.
Problem #4: Developing an NPM package
Situation
You are creating a library for publication. Users will integrate it into their projects, and they need transparency in how it works.
Before: Silent library or spam
Many libraries behave as an uninformative black box for the developers who use them.
// file-uploader.js
export async function uploadFile(file) {
  if (!isValidType(file)) {
    throw new Error('Unsupported file type');
  }
  const chunks = chunkFile(file);
  return Promise.all(
    chunks.map(chunk => uploadChunk(chunk))
  );
}

Because of this omission, manual debugging is required to identify the causes of errors.
The other extreme is when everything, necessary and unnecessary alike, is dumped to the console. You cannot control the output: flags like --debug or --verbose have no effect because the developer may simply never have added them.
// file-uploader.js
export async function uploadFile(file) {
  console.log(`Uploading ${file.name} (${formatBytes(file.size)})`);
  console.log('Validating file type');
  if (!isValidType(file)) {
    console.error('Invalid file type');
    throw new Error('Unsupported file type');
  }
  console.log('Preparing multipart upload');
  const chunks = chunkFile(file);
  return Promise.all(
    chunks.map(chunk => uploadChunk(chunk))
  ).finally(() => console.log('File is uploaded'));
}

After: A professional approach
Building on the previous examples, we can make the sequence of steps more informative and control that verbosity on demand with --debug or --verbose.
// file-uploader.js
import createLogger from '@dolgikh-maks/logger';

const logger = createLogger({ scope: 'FileUploader' });

// If the library is used not through a CLI but as a plain function,
// the debug flag can be passed explicitly
export async function uploadFile(file, options = { debug: false }) {
  logger.setDebug(options.debug);
  logger.info(`Uploading ${file.name} (${formatBytes(file.size)})`);
  logger.debug('Validating file type');
  if (!isValidType(file)) {
    logger.error('Invalid file type');
    throw new Error('Unsupported file type');
  }
  logger.debug('Preparing multipart upload');
  const chunks = chunkFile(file);
  const uploadPromise = Promise.all(
    chunks.map(chunk => uploadChunk(chunk))
  );
  return logger.loading('Upload file', uploadPromise);
}
**Output (normal mode):**
🔵 [FileUploader]: Uploading document.pdf (2.4 MB)
🟢 [FileUploader]: Upload file (3421ms)
**Output (debug mode):**
🔵 [FileUploader]: Uploading document.pdf (2.4 MB)
⚪ [FileUploader]: Validating file type
⚪ [FileUploader]: Preparing multipart upload
🟢 [FileUploader]: Upload file (3421ms)

Conclusion
Good logging is more than just console.log. It is a thinking tool that helps you understand system behavior in real time. Structured logs turn debugging from a pain into a systematic process.
@dolgikh-maks/logger solves this problem in the simplest way possible: install it, create an instance with a scope, and the code starts speaking to you in a language you understand. Try it, and your console will no longer look like a dumping ground for nameless messages.
npm install @dolgikh-maks/logger

Is this the full functionality? No.
Of course, I am considering further customization of the current solution, from displaying log timestamps in the console to configurable indicators for each output type.
If the idea is developed further, I would like to open it up to more contributors.
I have created a separate reading list where I will add articles as they are released.
My content is often saved to favourites, but Medium's algorithms also look at the number of claps a story gets.
If this article was useful, don't just save it: give it a clap as well. That helps promote the content.