My time working with Episerver CMS has come to an end, and I'm moving on, so I wanted to write a final part about this makeshift API. I did do more work on it since the last post, spurred on by requirements at work. This is it from me about Episerver.
In part 1, I briefly explained how I came to write an API for an Episerver CMS by exploiting the front-end framework of the web interface. As a developer with a finite lifespan, I jumped straight to the part I needed for the job. I had intended to go back to the beginning and map all the functionality, but unfortunately I haven't had much time to spend on this. That, together with some uncertainty about my future with Episerver, means I'm shelving the project. My day job is with Episerver, so it's always possible this API will be reignited by some requirement later.
I hope this has been helpful to someone.
At the end of last year (between Christmas and New Year's), I was tasked with auditing all the pages in an Episerver installation. The problem was that the Service API wasn't installed and the chances of deploying code to the server were pretty slim. This got me thinking about what solutions were feasible.
Do you need to take screenshots of numerous web pages? Yes? Then this article is for you. You can take screenshots of web pages programmatically using the WebBrowser class (System.Windows.Forms). Let's get straight to the code.
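Here is a minimal sketch of the idea, assuming a Windows desktop project with references to System.Windows.Forms and System.Drawing. The method and variable names (`CaptureScreenshot`, the example URL, the output path) are my own placeholders, not from the original post.

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Windows.Forms;

class ScreenshotTaker
{
    [STAThread] // WebBrowser must run on a single-threaded apartment thread
    static void Main()
    {
        CaptureScreenshot("https://example.com/", "page.png");
    }

    static void CaptureScreenshot(string url, string outputPath)
    {
        using (var browser = new WebBrowser())
        {
            browser.ScrollBarsEnabled = false;      // keep scrollbars out of the image
            browser.ScriptErrorsSuppressed = true;  // don't pop up script error dialogs
            browser.Navigate(url);

            // Pump the message loop until the page has finished loading
            while (browser.ReadyState != WebBrowserReadyState.Complete)
                Application.DoEvents();

            // Size the control to the full document so nothing is cut off
            browser.Width = browser.Document.Body.ScrollRectangle.Width;
            browser.Height = browser.Document.Body.ScrollRectangle.Height;

            // Render the control into a bitmap and save it as PNG
            using (var bitmap = new Bitmap(browser.Width, browser.Height))
            {
                browser.DrawToBitmap(bitmap,
                    new Rectangle(0, 0, browser.Width, browser.Height));
                bitmap.Save(outputPath, ImageFormat.Png);
            }
        }
    }
}
```

Note that `WebBrowser` wraps the legacy Internet Explorer engine, so modern pages may not render faithfully; the approach is best suited to quick, bulk captures rather than pixel-perfect output.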
In this article I'm going to show how to use the HttpWebRequest and HttpWebResponse classes to interact with websites. I'll focus on retrieving text responses (the source code of a web page) rather than binary files. I won't be covering WebClient or HttpClient; those may be articles for another time. At the end of the article I share a helper class.
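Before getting to the helper class, here is a minimal sketch of the basic request/response round trip. The URL is a placeholder and error handling is deliberately brief; this assumes .NET Framework, where HttpWebRequest was the standard choice.

```csharp
using System;
using System.IO;
using System.Net;

class PageFetcher
{
    static void Main()
    {
        // Create and configure the request
        var request = (HttpWebRequest)WebRequest.Create("https://example.com/");
        request.Method = "GET";
        request.UserAgent = "Mozilla/5.0"; // some sites reject requests with no user agent

        // Send the request and read the text response
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var stream = response.GetResponseStream())
        using (var reader = new StreamReader(stream))
        {
            Console.WriteLine("Status: " + (int)response.StatusCode);
            Console.WriteLine(reader.ReadToEnd()); // the page's source code
        }
    }
}
```

`GetResponse` throws a `WebException` for non-success status codes (4xx/5xx), so in real use you would wrap it in a try/catch and inspect `WebException.Response`.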