A little bit of failure

So yesterday, I failed.

I failed to integrate Django and Scrapy under the same project.

I am using a very opinionated boilerplate.

What I like about it is that it installs Tailwind (with Node) and makes it work with zero config; it also includes auth, Celery, and many other things that take time to configure and implement.

What I don't know if I like is that it follows a particular convention for how models and views are organized. Instead of following the Django way of creating multiple "apps", each with a single models.py holding its models, the boilerplate suggests creating just one app with multiple model files (book.py, author.py, category.py) and an __init__.py to tie them all together.
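That convention works because Django is happy with models living in a package instead of a single module, as long as the names are importable from the app's `models` namespace. A minimal sketch of what the boilerplate's `__init__.py` might look like (the `core` app name is my assumption; the file names come from the convention above):

```python
# core/models/__init__.py
# Assumed layout, one file per model:
#   core/models/book.py      -> class Book
#   core/models/author.py    -> class Author
#   core/models/category.py  -> class Category
from .book import Book
from .author import Author
from .category import Category

# Re-exporting here keeps imports identical to the single-file style:
# `from core.models import Book` works the same as with one models.py.
```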

This is definitely something new for me, something I must get used to, and in a way, a downside to using this boilerplate. But since I have not found any other free boilerplate that has what I need...

So, yesterday was all for naught. I couldn't get Scrapy and Django to work together. Mind you, I was trying to get the scraper I built for jobsinenglish.dk, which is a bit complicated, working.

Today I will follow a different approach. I will create a sample Django project with the boilerplate, and integrate the most basic Scrapy project (the tutorial from the docs).
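For the integration itself, one common approach (not necessarily what the boilerplate expects, so treat this as a sketch) is to bootstrap Django from inside the Scrapy project, so spiders and pipelines can use the ORM. The project and module names below are placeholders:

```python
# scraper/settings.py (Scrapy project settings; names are assumptions)
import os
import sys

import django

# Make the Django project importable from the Scrapy project directory.
sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)), ".."))

# Point at the Django settings module, then initialize the app registry.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
django.setup()

# After django.setup(), a pipeline can do e.g.:
#   from core.models import Book
#   Book.objects.create(title=item["title"])
```

If this works with the tutorial spider, wiring in the jobsinenglish.dk scraper should just be a matter of repeating the same setup.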

Let's see how I do.