• drre@feddit.de · 9 months ago

    does anyone know whether these results were obtained while taking the size of the dictionary into account?

    • abhi9u@lemmy.world (OP) · 9 months ago

      Do you mean the number of tokens in the LLM’s tokenizer, or the dictionary size of the compression algorithm?

      The vocab size of the pretrained models is not mentioned anywhere in the paper, although they did run an experiment measuring compression performance with tokenizers of different vocabulary sizes.

      If you meant the dictionary size of the compression algorithm, then there is no dictionary: they used arithmetic coding to do the compression, which doesn’t rely on one (see the sketch below).
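
      For anyone curious what that looks like in practice, here is a minimal, illustrative sketch of arithmetic coding driven by a probability model. It uses a toy fixed three-symbol distribution in place of an LLM’s next-token predictions; the symbols, probabilities, and function names are made up for the example and are not taken from the paper.

      ```python
      # Minimal arithmetic-coding sketch (toy example, not the paper's implementation).
      # The "model" is a fixed distribution; in the paper's setup the probabilities
      # would instead come from an LLM's next-token predictions.
      from fractions import Fraction

      # Hypothetical toy model: fixed probabilities for a 3-symbol alphabet.
      PROBS = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}

      def cum_range(symbol):
          """Return the cumulative probability interval [lo, hi) assigned to `symbol`."""
          lo = Fraction(0)
          for s, p in PROBS.items():
              if s == symbol:
                  return lo, lo + p
              lo += p
          raise KeyError(symbol)

      def encode(message):
          """Narrow [0, 1) once per symbol; any number in the final interval encodes the message."""
          low, high = Fraction(0), Fraction(1)
          for sym in message:
              span = high - low
              s_lo, s_hi = cum_range(sym)
              low, high = low + span * s_lo, low + span * s_hi
          return (low + high) / 2  # a single rational number inside the final interval

      def decode(code, length):
          """Invert the encoding by repeatedly finding which symbol's interval contains `code`."""
          out = []
          low, high = Fraction(0), Fraction(1)
          for _ in range(length):
              span = high - low
              target = (code - low) / span
              for sym in PROBS:
                  s_lo, s_hi = cum_range(sym)
                  if s_lo <= target < s_hi:
                      out.append(sym)
                      low, high = low + span * s_lo, low + span * s_hi
                      break
          return "".join(out)

      msg = "abac"
      code = encode(msg)
      assert decode(code, len(msg)) == msg
      ```

      Note that no dictionary of strings is ever stored: all the “knowledge” lives in the probability model, which is exactly why a better predictive model yields a shorter code.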

    • AbouBenAdhem@lemmy.world · 9 months ago

      It looks like they did it both ways (“raw rate” vs “adjusted rate”):

      In the case of the adjusted compression rate, the model’s size is also added to the compressed size, i.e., it becomes (compressed size + number of model parameters) / raw size. This metric shows the impact of model parameters on compression performance: a very large model might compress the data better than a smaller model, but once its size is taken into account, the smaller model might be doing better.
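
      To make that concrete, here is a toy calculation (the numbers are invented for illustration, not taken from the paper) showing how a large model can look great on the raw rate but terrible on the adjusted rate:

      ```python
      # Toy numbers, purely for illustration, not figures from the paper.
      raw_size = 1_000_000_000           # size of the raw data being compressed
      compressed_size = 150_000_000      # output size after model-based arithmetic coding
      model_parameters = 7_000_000_000   # model size, added as in the quoted formula

      raw_rate = compressed_size / raw_size                            # 0.15
      adjusted_rate = (compressed_size + model_parameters) / raw_size  # 7.15

      print(f"raw rate: {raw_rate:.2f}")            # looks like excellent compression
      print(f"adjusted rate: {adjusted_rate:.2f}")  # worse than storing the data uncompressed
      ```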

      • abhi9u@lemmy.world (OP) · 9 months ago

        Yes. They also mention that using such large models for compression is not practical, because the model’s size dwarfs any amount of data you might want to compress. But the result gives a good picture of how well these large models generalize, and how accurately they can predict the next tokens even for image/audio data.