
Track start offset and whitespace start offset for lexer #690

Open

norcalli wants to merge 1 commit into terralang:master from norcalli:master

Conversation

@norcalli

This makes it possible to get more accurate information when extracting tokens from the source.

I'm using this to do dbg- and expect-style testing, where I can modify the source in place with snapshot tests.

@norcalli
Author

For instance:

```lua
return {
  name = "dbg";
  entrypoints = { "dbg" };
  keywords = {  };
  expression = function(self, lex)
    lex:expect "dbg"
    local start_token = lex:cur()
    local expfn = lex:luaexpr()
    local end_token = lex:cur()
    local filename = lex.source
    -- terralib.printraw{"dbg", start_token, end_token}
    return function(env_fn)
      local env = env_fn()
      local data = assert(io.open(filename)):read "*a"
      local start = start_token.start_offset + 1
      local finish = end_token.ws_start_offset
      io.stderr:write(("WS:%q\n"):format(data:sub(end_token.ws_start_offset + 1, end_token.start_offset)))
      io.stderr:write(("CODE:%d:%d: %q\n"):format(start, finish, data:sub(start, finish)))
      return expfn(env)
    end
  end;
}
```

Without this change, it would be impossible to get the end offset of a token: it could only be derived from the next token's start offset, which is recorded after skipping whitespace and comments, so trailing whitespace and comments would be included in the extracted span.
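To make the offset arithmetic concrete, here is a minimal sketch in plain Lua. The token tables and their numeric offsets below are hypothetical values standing in for what `lex:cur()` would return under this patch; the sketch assumes the offsets are 0-based byte offsets into the source, so `+ 1` converts them to Lua's 1-based, inclusive `string.sub` indices, exactly as in the extension above.

```lua
-- Hypothetical source text and token offsets (0-based), standing in for
-- the start_offset / ws_start_offset fields this PR adds to lexer tokens.
local src = "dbg   x+1   end"

-- Token for "x", the first token of the expression:
local start_token = { start_offset = 6, ws_start_offset = 3 }
-- Token for "end", the first token after the expression; its whitespace
-- run begins right where the expression text ends:
local end_token = { start_offset = 12, ws_start_offset = 9 }

-- Same arithmetic as the dbg extension: + 1 converts a 0-based offset
-- into a 1-based, inclusive string.sub index.
local code = src:sub(start_token.start_offset + 1, end_token.ws_start_offset)
local ws   = src:sub(end_token.ws_start_offset + 1, end_token.start_offset)

assert(code == "x+1")  -- the expression text, with no trailing whitespace
assert(ws == "   ")    -- the skipped whitespace before "end"
```

With only the next token's `start_offset` available (the status quo), the extracted span would be `src:sub(7, 12)`, i.e. `"x+1   "`, dragging the trailing whitespace along; `ws_start_offset` is what lets the span stop at the end of the expression itself.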

@elliottslaughter
Member

Is there an easy way to add a short test case to sanity check this is doing what you expect?
