For this week's activity in the Topics in Open Source course, we were tasked with networking and collaborating with the open source community for the first release of our assignment.
We were required to test and review each other's code to identify any bugs, missing required features, or issues in the GitHub repository. With that being said, any issues we found were to be filed as GitHub issues for the developer to resolve.
Async approach to code reviews
I personally prefer the asynchronous approach to code reviews. It feels much closer to how open source actually works than synchronous reviews do: the community can test and review my project at any time, file anything they find as GitHub issues, and I can resolve them on my own time.
Testing and reviewing someone else's code
Reviewing someone else's code is always a good way to learn. It helps us grow as developers by showing how others approach problems differently. Whether they get it right or wrong, it is still a good learning experience.
I tested and reviewed my partner's tool, optimizeit. It is an amazing command-line application that helps you easily optimize the code in a provided file.
I did run into some problems, however, in both a good way and a bad way. While I did end up finding issues, it was challenging to dig them out of my partner's code; my partner deserves credit for writing clean, working code. There is always something that can be improved, but it took real effort to find it. So, along with my code review of my partner's project, I also take this as feedback to myself on areas I can improve.
After testing and reviewing optimizeit, these are the four issues I found in the code and in the repository:
Issue #1
When looking at open source projects, the next thing I usually look for after the project name is the project description, and I noticed that my partner's repository did not have one.
Even though the project includes a description in the README documentation, I find it important to also set one on the repository itself, so that the open source community can see a quick summary of the project at a glance without having to open the repository and dig through the README.
Issue #2
Going through the project's README documentation, I noticed that there was no section listing the dependencies or packages the project relies on. Documenting them is a good form of transparency for users, and linking to each package's documentation helps developers understand what the project uses and makes setup easier; a short section like the one sketched below would cover it.
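For example (the package names here are illustrative guesses based on what the tool appears to use, not necessarily its actual dependencies):

```md
## Dependencies

- [groq-sdk](https://www.npmjs.com/package/groq-sdk): Groq API client
- [commander](https://www.npmjs.com/package/commander): command-line argument parsing
```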
Issue #3
Continuing through the documentation, I noticed a simple typo in one of the usage examples for a flag option, where the command was supposed to be -md instead of --md.
Issue #4
One of the main issues I found actually broke the tool. I am also using Groq like my lab partner; however, we initialized our Groq clients differently. I initialized my client with only an API key, while my partner initialized theirs with an API key and the baseURL https://api.openai.com/v1.
Running optimizeit did not work: it gave a 404 error stating that the POST request /v1/openai/v1/chat/completions was not found.
Looking at Groq's documentation for OpenAI compatibility, they provide a base URL, https://api.groq.com/openai/v1, which apparently also doesn't work.
Whenever I ran optimizeit and got the 404 error, I noticed that the client would always append openai/v1/chat/completions to the end of the provided baseURL. So I figured that removing /v1 or /openai/v1 from the baseURL might do the trick, and it did. Removing the baseURL option entirely also solves the issue, since the client then falls back to its default host.
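To make the difference concrete, here is a minimal sketch of the two initializations as I understand them, assuming the Node.js groq-sdk (the exact option names in my partner's code may differ):

```js
import Groq from "groq-sdk";

// Broken: the SDK already appends openai/v1/chat/completions to the
// baseURL, so this ends up POSTing to /v1/openai/v1/chat/completions -> 404
const broken = new Groq({
  apiKey: process.env.GROQ_API_KEY,
  baseURL: "https://api.openai.com/v1",
});

// Working: omit baseURL and let the SDK use its default host
const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });
```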
My own code being reviewed
When I started coding the assignment, my first main goal was to get it working. Our assignment was about leveraging LLMs to transform files passed in as arguments, and it was my first time working with LLMs myself. We could come up with any idea that met this requirement, and I chose one of the provided examples: generating a README file based on the provided file, a tool I call genereadme.
My partner caught some bugs in my project, most of them unhandled errors, along with a few suggestions for improvement:
Issue #1
When running the tool, it initially just generated a README.md file and saved it to the directory of the file that was passed in. This meant that if the user passed a file that lived in the project's own root directory, the project's README.md would be overwritten by the generated version.
I wrote the code with the assumption that the user would pass files from a different directory, so this was a good catch!
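One possible fix (a sketch, not necessarily the exact change I shipped) is to write generated files into a dedicated output directory instead of the input file's directory:

```js
import fs from "node:fs";
import path from "node:path";

// Write generated READMEs into a dedicated output directory so an
// existing README.md at the project root is never clobbered.
const outDir = path.join(process.cwd(), "generated");
fs.mkdirSync(outDir, { recursive: true });
const outputPath = path.join(outDir, "README.md");
```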
Issue #2
An uncaught error when running the tool with an invalid file. While the error itself mentions "No such file or directory", it is still a huge block of stack trace that provides no real context to the user.
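A small guard like this sketch turns that stack trace into a short, readable message (readSourceFile is a hypothetical helper name, not code from my actual project):

```js
import fs from "node:fs/promises";

// Read the input file, but exit with a human-readable message
// instead of dumping a full stack trace on a missing file.
async function readSourceFile(filePath) {
  try {
    return await fs.readFile(filePath, "utf-8");
  } catch (err) {
    if (err.code === "ENOENT") {
      console.error(`Error: no such file: ${filePath}`);
      process.exit(1);
    }
    throw err; // anything unexpected should still surface
  }
}
```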
Issue #3
Another uncaught error when running the tool without providing a valid API key.
At the time of my partner's testing, the -a or --api-key flag, which accepts an apiKey as a command-line argument, was not yet implemented, so the only way to provide a key was by creating a .env file. My partner ran the tool without one and hit another uncaught error stating that the Groq client failed to initialize due to a missing API key.
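The fix boils down to validating the key before constructing the client, roughly like this sketch (createClient is a hypothetical helper, and GROQ_API_KEY is an assumed variable name):

```js
import Groq from "groq-sdk";

// Check both sources of the key before creating the client, and
// exit with a clear message when neither is set.
function createClient(apiKeyFlag) {
  const apiKey = apiKeyFlag ?? process.env.GROQ_API_KEY;
  if (!apiKey) {
    console.error(
      "Error: missing API key. Pass -a/--api-key or set GROQ_API_KEY in a .env file."
    );
    process.exit(1);
  }
  return new Groq({ apiKey });
}
```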
Issue #4
A missing required feature and a simple nitpick from my partner.
The requirement for the version flag is that it should be -v or --version, but my project had it as -V. I am using commander.js for command-line argument parsing, and the package sets the version flag to -V by default. The requirement also says the flag should display the tool's name along with the current version, while my project only displayed the version at the time.
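commander makes both fixes a one-liner, since .version() accepts custom flags; here is a sketch (the version string is illustrative):

```js
import { Command } from "commander";

const program = new Command();

// Override commander's default -V flag with -v/--version, and include
// the tool's name in the version string so both are printed together.
program
  .name("genereadme")
  .version("genereadme 1.0.0", "-v, --version", "output the tool name and version");
```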
Issue #5
When I chose to make this tool, I immediately thought of a README generator as a way to ease documenting code and projects: it reduces development time by making it easy to document a collection of files. However, when I started coding, this being my first time working with LLMs as I mentioned, I wrote a prompt describing how the README file should be formatted and how the code contents of the file should be processed.
My partner broke my tool by providing a text file with this simple content:
Once upon a time, there was a boy called Romeo. One day, Romeo met Juliet!
and the tool happily started explaining the novel, even though my prompt was about properly explaining and documenting code. So I took my partner's tips on how to improve my prompt engineering.
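A hypothetical example of the kind of guardrail my partner suggested, sketched as a system prompt (the wording here is mine, not theirs):

```js
// Constrain the model to documenting code, and tell it explicitly
// what to do when the input file does not contain code.
const systemPrompt = `You are a tool that documents source code.
Generate a README.md describing the code in the provided file.
If the file does not contain source code, respond only with:
"The provided file does not appear to contain code."`;
```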
Conclusion
In the end, I was able to resolve all of my issues thanks to my partner's detailed issue reports, which included suggested fixes for the bugs that were caught. This lab was a good learning experience, as it gave us first-hand exposure to how the open source community collaborates.
Through this lab, I learned a number of important things:
- There is always something that can be improved.
- It is good to have someone else review your code, because there is always something you can miss, no matter how small the issue is.
- Filing a very detailed issue is crucial as it is very helpful for the developer to understand what is wrong, when and how a bug happens, and what can possibly be done to fix the issue.