
Example in README cannot read the last entry in the append-only log #1

@Exadra37

First, thanks for this project :)

While playing with it, I found that I always fail to read the last entry in the log.

Running this code:

dir = Path.join(".examples", EventLog.Utils.random_string(16))

{:ok, log} = EventLog.start_link(dir)
{:ok, _} = EventLog.create_stream(log, "my_stream")

EventLog.append(log, "my_stream", "foo0")
EventLog.append(log, "my_stream", "foo1")
EventLog.append(log, "my_stream", "foo2")

# :timer.sleep(2000)

# read from beginning
{:ok, reader} = EventLog.get_reader(log, "my_stream")

EventLog.Reader.get_next(reader) |> IO.inspect
EventLog.Reader.get_next(reader) |> IO.inspect
EventLog.Reader.get_next(reader) |> IO.inspect

EventLog.Reader.get_next(reader) |> IO.inspect # no more entries

# read from offset=2
{:ok, reader} = EventLog.get_reader(log, "my_stream", 2)
EventLog.Reader.get_next(reader)

# read all entries
EventLog.Reader.get_batch(reader, 0, 3) |> IO.inspect

EventLog.close(log)

outputs this:

20:47:45.503 [info]  init, opts: [max_seg_size: 1073741824]

20:47:45.504 [debug] dir: .examples/V6KaYgoOKPEKxv1n/my_stream, opts: [max_seg_size: 1073741824]

20:47:45.511 [debug] {".examples/V6KaYgoOKPEKxv1n/my_stream/00000000000000000000.seg", -1, 0}
{:ok, {0, "foo0", 1608497265517, 0, 795769153}}
{:ok, {1, "foo1", 1608497265517, 0, 1483295191}}
{:ok, :eof}
{:ok, :eof}
[
  {0, "foo0", 1608497265517, 0, 795769153},
  {1, "foo1", 1608497265517, 0, 1483295191}
]

As we can see, foo2 doesn't appear in the append-only log; instead we find the end of the file.

Now, if we sleep for 2 seconds by uncommenting the :timer.sleep(2000) line after the appends, we can read it:

20:51:17.327 [info]  init, opts: [max_seg_size: 1073741824]

20:51:17.329 [debug] dir: .examples/DJ5MMl_kCwfeWddQ/my_stream, opts: [max_seg_size: 1073741824]

20:51:17.336 [debug] {".examples/DJ5MMl_kCwfeWddQ/my_stream/00000000000000000000.seg", -1, 0}
{:ok, {0, "foo0", 1608497477342, 0, 795769153}}
{:ok, {1, "foo1", 1608497477342, 0, 1483295191}}
{:ok, {2, "foo2", 1608497477345, 0, 3244300397}}
{:ok, :eof}
[
  {0, "foo0", 1608497477342, 0, 795769153},
  {1, "foo1", 1608497477342, 0, 1483295191},
  {2, "foo2", 1608497477345, 0, 3244300397}
]

I think this is due to the Linux write cache being enabled by default in the OS; therefore, the writes are not flushed from the cache to the disk immediately.
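Independent of any OS-level setting, the write could also be flushed at the application level. Erlang's :file module exposes :file.sync/1 for this, and it may be worth noting that the :delayed_write open option buffers writes inside the file driver for up to 2000 ms by default, which would line up with the 2-second sleep above. A minimal sketch of a flush-on-write, assuming a plain raw file device (the file name and flow here are illustrative, not EventLog's actual internals):

# minimal sketch; the segment file name is illustrative, not EventLog's API
{:ok, seg} = :file.open("00000000000000000000.seg", [:append, :raw, :binary])
:ok = :file.write(seg, "foo2")
# force the buffered data down to the device before a reader looks for it
:ok = :file.sync(seg)
:ok = :file.close(seg)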

Disabling the Linux write cache could resolve this, as per this article:

Not all systems belong to the same "turn on write-back caching" recommendation group, as write-back caching carries a risk of data loss in events such as a power failure. In the event of a power failure, data residing in the hard drive's cache does not get a chance to be stored and is lost. This fact is especially important for database systems. In order to disable write-back caching, set write-caching to 0:

# hdparm -W0 /dev/sda

/dev/sda:
setting drive write-caching to 0 (off)
write-caching =  0 (off)
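For what it's worth, running hdparm -W with no value after the flag queries the current setting, so the change can be verified afterwards. It should print something like:

# hdparm -W /dev/sda

/dev/sda:
 write-caching =  0 (off)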

I will later test this on a server rather than on my laptop, and will update the issue.
