beat / beat.core · Issues · #41
Closed
Issue created Jul 31, 2015 by Laurent EL SHAFEY @laurent.el-shafey

[io] Optimization of I/O when using pipes

With the introduction of communication via pipes between the user process and the daemon, a chunk of data may be decoded from a baseformat and then immediately re-encoded into a baseformat just to send it through the pipe. This happens when next() is called: the chunk is read using a CachedDataSource (which decodes it from a baseformat) and then sent through a pipe (which re-encodes it into a baseformat). This decode/encode round trip is suboptimal and could be avoided by specializing the CachedDataSource.
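A minimal sketch of the idea, not the actual beat.core API: `encode`/`decode` stand in for the baseformat codec, and a hypothetical `RawDataSource` subclass forwards the still-encoded bytes straight to the pipe instead of decoding and re-encoding them.

```python
import io
import struct

# Illustrative stand-ins for the baseformat codec (not beat.core's).
def encode(value):
    return struct.pack("<q", value)

def decode(data):
    return struct.unpack("<q", data)[0]

class CachedDataSource:
    """Current behavior: next() decodes each cached chunk into a
    baseformat object, which the pipe layer must re-encode."""
    def __init__(self, stream):
        self.stream = stream

    def next(self):
        data = self.stream.read(8)
        return decode(data) if data else None

class RawDataSource(CachedDataSource):
    """Specialization: skip decoding and hand the encoded chunk to
    the pipe as-is, avoiding the decode/encode round trip."""
    def next(self):
        return self.stream.read(8) or None

cache = io.BytesIO(encode(1) + encode(2))
raw = RawDataSource(cache)
chunk = raw.next()           # raw bytes, ready to write to the pipe
assert decode(chunk) == 1    # still decodable on the receiving side
```

The receiving end decodes exactly once, so the data crossing the pipe is identical; only the redundant intermediate decode in the sender is removed.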
