There is such a metric – character count – and Stack Exchange has an entire site dedicated to it: Programming Puzzles & Code Golf.
Golfing can be a lot of fun, but it’s not overly useful for commercial programming. Your typing speed is rarely the limiting factor on how quickly you produce working code, so having to type twice as many keystrokes to accomplish a task isn’t a problem.
Remember, source code is read more often than written. Saving keystrokes is a false economy because it often makes the code more difficult to read.
Character count of source code is not commonly used as a metric because it is not really useful. The reason is that most of the characters in a mainstream language will be identifiers, so the choice of shorter or longer identifiers will dwarf every other factor when measuring the conciseness of the code.
For example, a C# method declaration such as `public int Add(int firstNumber, int secondNumber)` takes far more characters than the JavaScript declaration `function b()`, yet the difference says nothing about the complexity of the code; it is almost entirely down to identifier length. With a different choice of identifiers, the comparison could just as easily be the opposite.
Generally you want conciseness of code (reducing boilerplate and accidental complexity), but at the same time you want meaningful and descriptive identifier names. Since these two goals can have opposite effects on the character count, it is not really useful as a metric for either quality or complexity.
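To make the effect concrete, here is a small Python sketch (both snippets and all names are invented for illustration): two functionally identical definitions whose character counts differ by more than 3x purely because of naming.

```python
# Two functionally identical definitions: same structure and the same
# number of tokens, but very different character counts purely because
# of identifier choice. (Both snippets are invented for illustration.)
terse = "def f(a, b): return a + b"
descriptive = "def add_line_totals(first_total, second_total): return first_total + second_total"

print(len(terse), len(descriptive))  # 25 vs 81 characters
```

By any reasonable notion of quality, the longer version is the better code, which is exactly why character count fails as a quality metric.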
Lines of code (LOC) is a problematic metric, but at least it is not affected by the length of identifiers. Counting the number of tokens (rather than characters) would probably be more useful for your purpose.
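A rough token count can be sketched in Python with the standard `tokenize` module (which tokens to treat as purely structural and skip is a judgment call):

```python
import io
import tokenize

def count_tokens(source: str) -> int:
    """Count the lexical tokens in a piece of Python source."""
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    # Skip purely structural tokens so only "real" code tokens are counted.
    skip = {tokenize.NEWLINE, tokenize.NL, tokenize.INDENT,
            tokenize.DEDENT, tokenize.ENDMARKER}
    return sum(1 for tok in tokens if tok.type not in skip)

terse = "def f(a, b): return a + b\n"
descriptive = "def add_totals(first_total, second_total): return first_total + second_total\n"

# Same token count (12 each) despite very different character counts.
print(count_tokens(terse), count_tokens(descriptive))
```

Under this measure, renaming an identifier no longer changes the score, which is exactly the property that character count lacks.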
The only context where measuring character count makes sense is if you are concerned purely with the physical size of the source code (e.g. to decide how much storage you need on the build server), in which case you would simply measure bytes, i.e. source size in KB or MB.
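For that storage-sizing use case, summing file sizes is all you need; a minimal Python sketch (the extension list is an assumption, adjust for your codebase):

```python
import os

def source_size_bytes(root: str, extensions=(".cs", ".py")) -> int:
    """Total size in bytes of source files under root.

    The extension list is an assumption; adjust it for your codebase.
    """
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(extensions):
                total += os.path.getsize(os.path.join(dirpath, name))
    return total
```

Divide by 1024 or 1024**2 to report the total in KB or MB.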