
Performance optimizations #16

@typerandom


Up until now, the focus has been on API design and features rather than on performance and memory usage. So, as you can see in the benchmark below, the library is currently outperformed by its competitors. (Edit: Worth noting that this is no longer the case. See the latest bench in the comments.) That said, this should not be hard to fix.

Name                                            Runs        Time spent      Memory
-----------------------------------------------------------------------------------------------------------
BenchmarkNativeMin                              20000000    111 ns/op       34 B/op        2 allocs/op
BenchmarkValidatorMin                           200000      11590 ns/op     3100 B/op      88 allocs/op
BenchmarkCompetitorGoValidatorMin               200000      8998 ns/op      2100 B/op      66 allocs/op
BenchmarkCompetitorAsaskevichGoValidatorMin     5000000     672 ns/op       32 B/op        2 allocs/op

Things I know are unnecessarily expensive:

  • Parsing structure tags - Could easily be cached. (implemented)
  • Walking structure fields - At the moment the whole structure graph is traversed. This could be limited to fields that carry the validate tag, though of course that worsens the user experience.
  • Normalization - Normalizing values to their 64-bit counterparts, dereferencing values/passing copies, and turning nil values into their "zero" structures is expensive. Should do some benchmarking to see exactly how expensive it is.

Another optimization that could be done is to build a validation graph for each structure type, so that one would not have to walk the whole struct graph every time.
