Force openlayers to not use browser cache for tiles refresh

I use OpenLayers.Layer.XYZ to display tiles from a TileStache server without using the server's cache option. However, I notice that the tiles stay cached (probably in the browser cache) until the whole page is refreshed with Ctrl-F5.

If I want to redraw the XYZ layer, it does not work because the map uses the browser cache. Is there a way to force the map not to use that cache, so I could refresh the layer by requesting fresh tiles from the server?

var map = new OpenLayers.Map('map', {
    projection: new OpenLayers.Projection("EPSG:3857"),
    numZoomLevels: 20
});
var tiledLayer = new OpenLayers.Layer.XYZ('TMS', "{{ tmsURL }}1.0/layer/{{ }}/${z}/${x}/${y}.png");

Jakub Kania was correct in his comment: the date/time has to be added to the URL to make it different from the URLs of the tiles already in the cache. You have to subclass OpenLayers.Layer.XYZ for that:

OpenLayers.Layer.CustomXYZ = OpenLayers.Class(OpenLayers.Layer.XYZ, {
    getURL: function () {
        var url = OpenLayers.Layer.XYZ.prototype.getURL.apply(this, arguments);
        return url + '?time=' + new Date().getTime();
    }
});
var tiledLayer = new OpenLayers.Layer.CustomXYZ('TMS', "{{ tmsURL }}1.0/layer/{{ }}/${z}/${x}/${y}.png");

answered Aug 25 '14 by Below the Radar
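The trick generalizes beyond OpenLayers. As a minimal sketch (the helper name addCacheBuster is made up for illustration), appending a changing query parameter makes every request URL unique, so the browser cannot answer it from cache:

```javascript
// Sketch of the cache-busting idea, independent of any mapping library.
// addCacheBuster is a hypothetical helper, not part of OpenLayers.
function addCacheBuster(url, stamp) {
  // use '&' if the URL already carries a query string, '?' otherwise
  var sep = url.indexOf('?') === -1 ? '?' : '&';
  // default to the current time in ms, as in the subclass above
  return url + sep + 'time=' + (stamp !== undefined ? stamp : new Date().getTime());
}

console.log(addCacheBuster('http://tiles.example/1/2/3.png', 42));
// -> http://tiles.example/1/2/3.png?time=42
```

Note that this defeats caching entirely, including useful caching; if the server sets proper Cache-Control headers, a bumped version number instead of a timestamp lets unchanged tiles stay cached.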

Force client-side browser CSS/JS cache reload

How can I force the client browser to re-fetch JS / CSS files?

I've noticed that when I add to existing .CSS files, the updates are only applied if the user refreshes the page. (In other words, simply navigating to the page will not work).

I've tried flushing the caches (including JS/CSS) as well as rebuilding my minified CSS/JS files. Unfortunately I'm still seeing this behavior in Chrome, Safari and Firefox. (Internet Explorer, funnily enough, behaved quite well.)

I looked at this question on SO:

I was wondering if Magento has any built-in way of doing this without requiring source changes or the installation of third-party software?

Can you steal my fishing coordinates?! (Info in comments)

Yes, easily. They are loaded from a JSON file. You can simply use the browser's network inspector to see all the requests and pick it out there.

To safely do this you would either need to randomly misplace each location or use a low-resolution raster representation (ideally also with random offsets if you want to be extra safe).
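The "randomly misplace each location" idea can be sketched as follows (jitterPoint is a hypothetical helper, not an Esri API, and the offset size is an assumption you would tune to how much you want to hide):

```javascript
// Add a random offset of up to maxOffsetDeg degrees to a point before
// publishing it. The rng parameter exists so the jitter can be made
// deterministic in tests; it defaults to Math.random.
function jitterPoint(lat, lon, maxOffsetDeg, rng) {
  rng = rng || Math.random;
  // rng() is in [0, 1); map it to [-maxOffsetDeg, +maxOffsetDeg)
  var dLat = (rng() * 2 - 1) * maxOffsetDeg;
  var dLon = (rng() * 2 - 1) * maxOffsetDeg;
  return { lat: lat + dLat, lon: lon + dLon };
}

// Example: displace a spot by at most ~0.01 degrees (roughly 1 km)
var p = jitterPoint(58.30, -134.42, 0.01);
console.log(p.lat.toFixed(2), p.lon.toFixed(2));
```

The jitter must be applied once, before the data is uploaded; if the exact coordinates ever reach the hosted layer, no amount of popup hiding protects them.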

I explored using raster tile layers to display the points, but not being able to edit features in the cloud and having to recreate the entire tile cache after each edit were dealbreakers.

Is there a way to dynamically generate raster tiles on the fly? Can I host a private map in the cloud that I can edit by logging in, but that only gives users access to a difficult-to-pilfer raster representation?

Hey, do you mind pointing me in the right direction of how you extracted these and loaded them into QGIS?

This is the solution in my opinion.

And there you have it. Thanks!

I'm testing ESRI Online as a way to display fishing spots on a nautical chart without giving away their exact position. I've created an embeddable map that I think will keep the coordinates to my honey holes safe, but I'm not sure.

I've tested this map in an incognito window, and so far I haven't found a way to extract exact coordinates. Does someone with advanced GIS skills want to take a stab at stealing my spots in this test map?

A couple possible avenues:

Somehow clicking through the embed map and opening it in the ArcGIS online map viewer.

Each point has its coordinates stored as a text attribute in DMS format. I've hidden those fields in the popup info boxes, but if a user could force the attribute table to open, they'd have everything.

Downloading the underlying feature layer and opening it in desktop GIS software.

Server cache still using source files

I have a few imagery tile caches created from imagery in either MrSID or Grid format in ArcGIS Server. I'm trying to clear out some hard drive space and wanted to remove the copy of the imagery on that server, or at least move it. However, despite the cache having been created (and, I assume, being all that is used to display the imagery), Server will not allow me to remove the imagery files that were used to create the cache. If I stop the server service and remove the files, no imagery appears anymore in my applications.

1. Why would a cache need the original imagery data when working with a cache? Is there some way to change this?

2. Where is the reference to the original imagery located so I can change it? I would like to move the data to another drive but not have to recreate the caches since it cripples the server during that process.

by RobertScheitlin__GISP

Looking at your config.xml, you are not using any layer types but dynamic, so your cached layers are not actually being used at all. You are bypassing the caching because you have them set as type="dynamic".

Just in case this helps others: the reason the source data was still being used was that the cache was set to "update automatically" instead of "manually" in the settings. Since it is imagery and won't change, I simply changed it to "manually" and was then able to move the imagery.

by RebeccaStrauch__GISP

That's a good tip to remember.

Just as an aside, I tend to always check "manual", since it's much cleaner, in my opinion, if you can control when and where (extent) the cache is created. In pre-10.2.x I thought this was even more necessary, especially since we have a large (ocean) waterbody within our full extent that never really needed to be cached in detail (the data sources weren't of the quality to make a difference). That's just another reason that having "manual" checked might help others.

And one more note: in previous versions (I can't remember which), if you created a new service and set up the cache on creation, even if you selected "manual", it would switch back to automatic by default. So what I did was create the service as dynamic, then immediately go back in, set up the cache and create it "manually". You didn't mention which version you are using, so I wanted to mention this in case you are running into that issue.

By the way, since your problem is solved, remember to mark your question as answered.

The utility to delete cached credentials is hard to find. It stores both certificate data and also user passwords.

Open a command prompt, or enter the following in the Run dialog:

rundll32.exe keymgr.dll,KRShowKeyMgr

Windows 7 makes this easier by creating an icon in the control panel called "Credential manager"

There is also a command-line utility:

net use (to see what you're connected to)

net use * /delete (to delete all connections)

net use info is not the same info as listed in keymgr or credential mgr.

FYI, I just encountered a case where a credential (possibly corrupt, since it showed up under an entry named with only two odd Unicode characters) appeared only in the rundll32.exe keymgr.dll,KRShowKeyMgr interface, and not in the Credential Manager interface found in the Windows 7 control panel. So it may be worth checking both interfaces for cached credentials.


The MapML (Map Markup Language) explainer

The W3C Maps for HTML Community Group is iterating on the problem space. You can contribute to the on-going discussion and documentation of Use Cases and Requirements for Standardizing Web Maps. Alternatively, if your organization is a member of the Web Platform Incubator Community Group (WICG) and you are able to contribute there but not elsewhere, please consider contributing through the WICG forum on Web mapping. We would love to hear from you.

Web maps are a well-established domain of Web design, and there exist popular, mature open and closed source JavaScript libraries to create and manage Web maps. JavaScript web maps are often containers for publicly available and funded open geospatial and statistical data. Yet despite established JavaScript libraries and server-side API standards, Web maps remain a complex Web niche that is difficult to learn due to the extensive prerequisite knowledge they require. As a result, there exists a community of Web map developers which contributes very little to the Web platform and which may possess little understanding that the Web exists as a distinct and standards-based platform. Similarly, the Web platform seems mostly oblivious to Web maps and their requirements, and provides no direct support for maps. In other words, Web maps' existence in the Web platform depends on intermediaries which "abstract away" the Web platform.

The goal of this proposal is to bridge the gap between the two communities in a way that may have positive benefits for both sides. On the one hand, the Web mapping community is burdened by intermediaries and the consequent barriers to widespread creation and use of maps and public map information. On the other hand, the Web platform, especially the mobile Web, needs more and better high-level features and less JavaScript. Simple yet extensible Web maps in HTML, that equally leverage the other platform standards, is the feature that both communities need to come together to improve usability and accessibility for users.

Web maps today are created using a wide range of technology stacks on both the client and server, some standard, some open, and some proprietary. The complexity of choices and the wide variety of technologies required to create Web maps results in maps of highly variable usability and accessibility. This has in turn led to the creation of centralized mapping services, which may or may not be implemented using Web technology. In some cases, mapping services that work well on desktop Web browsers mostly bypass the mobile Web through the creation of mobile platform mapping apps, where the "rules of the Web platform" (such as device permissions) do not apply. Some centralized mapping services, both on the Web but especially on mobile technology platforms, are constructed for the purpose of tracking the user's location and their locations of (search) interest, and using that private location information to market and re-sell highly targeted advertising.

The problem to be solved, therefore, is to reduce the threshold complexity of creating accessible, usable and privacy-preserving Web maps, and to enable full use of Web platform standards such as HTML, URL, SVG, CSS and JavaScript in map creation, styling, presentation and interaction.

To solve the problem, our approach is to identify the Web map processing that is currently performed by JavaScript libraries which should instead be defined - in accordance with the HTML Design Principles - as elements and attributes supported by CSS, while at the same time, we identify the Web map processing that should remain in the JavaScript domain as a standardized DOM API. By building the core behaviour of maps and layers into HTML, Web authors who want to build simple maps into their pages can easily do so, supported by core platform technologies, with the power of JavaScript available to enhance the core map and layer behaviour.

By lowering the barriers for Web map authors in this way, we will improve the usability, and standardize the accessibility of Web maps. Through making map creation a matter of applying appropriately crafted Web platform standards, we will create the conditions to multiply the choices of mapping services offered to authors and users of the Web.

In improving the choices among mapping services available through the Web platform, we will enable the growth of services that offer alternate means of paying for maps other than in exchange for the user’s personal private information, and we will enable standardized Web map accessibility through addition of maps to HTML. Finally, by making it cheaper to create Web maps than to build mobile apps, we will improve the business rationale for choosing the mobile Web as a development platform, and in doing so we hope the (mobile) Web will benefit from increased ‘success’, or network effects.

  • Define the means to allow authors to create dynamic, usable and accessible Web maps about as easily as they can embed an image, a video or a podcast today.
  • Define and embed accessibility of map feature and location information into HTML for use by screen readers and other assistive technology.
  • Define and design security of map information considerations into the Web platform.
  • Define the markup to create mapping mashups that doesn’t necessarily require scripting or detailed mapping server technology knowledge i.e. that can be accomplished about as easily as linking to a document.
  • Simplify the use of public spatial data infrastructures (SDI), such as OpenStreetMap and national and international SDIs, by designing the integration of those services into the proposed Web platform mapping standards.
  • Define and advocate for adding map-enabled HTML to the serialization formats available from existing spatial (map) content management systems, APIs and Web Services.
  • Ensure interoperability with the operating model or availability of existing spatial (map) content management systems, APIs and Web Services, for example the evolving OGC API standards.

The Extensible Web Manifesto calls for iterative development and evolution of platform features, starting with low-level ‘primitives’ and resulting eventually in high-level features. Although there are several low-level primitive proposals inherent or implicated in this proposal, overall this can be seen as a proposal for a high-level feature. That feature is declarative dynamic Web maps in HTML. Web mapping is a mature category of JavaScript library that is far enough into its development life cycle that some of the aggregate characteristics of those libraries should be incorporated into the platform. As such, this proposal captures some of the ‘cow paths’ of open and closed source JavaScript Web mapping libraries, as well as taking into consideration how to incorporate server-side mapping services and APIs.

The proposed extension would create a standard <map> widget that contains controls in a user agent shadow root, (similar to <video> today), with child <layer> elements which are in, and may contain, light DOM map-related markup (the vocabulary of which is also part of this proposal):
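A rough sketch of what such markup might look like, based only on the description above (element names, attributes and the URL are illustrative, not the finalized vocabulary; see the linked explainer for the actual elements):

```html
<!-- illustrative only: a <map> widget with built-in controls and
     child <layer> elements, as described in this proposal -->
<map zoom="10" lat="45.4" lon="-75.7" controls>
  <layer src="https://example.org/tiles/" label="Base map" checked></layer>
</map>
```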

See the High-Level API explainer for details on the proposed elements and polyfill.

Detailed design discussion

Use Cases and Requirements

This proposal is being evaluated against the Use Cases and Requirements for Standardizing Web Maps, to identify gaps between the required functionality and the polyfilled behaviour.

See the MapML UCR Fulfillment Matrix for how MapML compares in capabilities in contrast to existing popular web mapping libraries.

W3C/OGC Joint Workshop on Maps for the Web

Natural Resources Canada hosted the 2020 W3C/OGC Joint Workshop Series on Maps for the Web in cooperation with the Maps for HTML Community Group.

Considered alternative designs of MapML

TBD - we have considered many alternatives, I have just run out of steam to document them, at the moment. Also this document is already quite long. As things progress, I will add content here.

  • SVGMap: is it possible to merge the SVGMap proposal and this proposal? Or are they competing proposals?
  • APIs: Leaflet, OpenLayers and others, (albeit others without any notion of cross-origin resource sharing) provide excellent map scripting APIs and events. Can these or similar APIs be built on top of the proposed HTML infrastructure? Would life be simpler for authors with the proposed HTML?
  • Status quo

Stakeholder Feedback / Opposition

Some participants have said we should start over. Aside from resting on a sunk-costs argument, that doesn't seem to be in the spirit of iteration, nor hopefully is it correct to see this proposal as a waste of energy or money. A better strategy would be to solicit concrete, actionable and incremental change requests. It is in that spirit that the current explainer is offered.

The objective of this project is to get Web browser projects to agree that Web maps as a high level feature are desirable and that the proposal is implementable, and then to implement and ship the feature. To get there from here, we need browser developer participation, the absence of which appears equivalent to opposition. So, there is work to do.

References and Acknowledgements

Contributions, advice and support from the following people are gratefully acknowledged:

Benoît Chagnon, Brian Kardell, Michael tm Smith, Robert Linder, Joan Masó, Keith Pomakis, Gil Heo, Jérôme St-Louis, Amelia Bellamy-Royds, Nic Chan, Nick Fitzsimmons, Simon Pieters, Tom Kralidis, Daniel Morissette, Chris Hodgson, Ahmad Yama Ayubi, Bennett Feely, Doug Schepers

If I’ve forgotten to mention you, please open an issue.

Errors and omissions are certainly my own; if you spot a correction needed in the above, please open an issue.

Can I provide map corrections? Yes, please let me know by Email with as much detail as you can provide and I’ll endeavour to change the map. Please provide references to source documents or maps if able, but local knowledge is also appreciated. If you want to sketch out the map, or even provide a KML that would be great. I normally try and respond within a couple of weeks, but if the workload is heavy or I'm on hols it may take me longer.

Can I email you? Yes to [email protected] I try and respond to all questions and suggestions within a couple of weeks. Sometimes I can’t respond because mail servers and spam-filters result in a block. If you can’t get through by email then try Facebook or Twitter.

Can I have a copy of the map and its source data? No, not at present. I’m currently restricting access via the website itself, so no source data is publicly available (I may make it available to buy in the future). However, if you have interest in data for a specific small area then I might be able to help. For those that only need modern railway data then Open Street Map data is freely available (Geofabrik) and you can download and display the data in GIS software such as Google Earth.

Can I use images of the map? Yes, but within normal copyright rules - i.e. only use a reasonable number of images, don’t copy the whole thing, don't resell (see UoY Guidance). If in doubt ping me an email for permission. Please include my copyright statement (as shown at the bottom of the page) in any image, and please provide a reference back to RailMapOnline. Also note that the background mapping imagery is also subject to copyright (e.g. from Google) so make sure to honour their terms as well.

Why doesn’t it work on my browser/phone? I try and test the website with different browsers, but I’m afraid that older browsers (e.g. Internet Explorer) aren’t supported. Please upgrade to a modern browser if you’re still using IE! I can’t test different phones and operating systems – if you find a problem please let me know with as much information as you can provide.

Why has it stopped working? I’m afraid being a free hobby site I can’t guarantee the service. In particular I’m reliant on my web hosting company (who are very reliable) and Google for rendering the maps (who occasionally have issues). Let me know if there’s a problem and I’ll try and look into it as soon as possible. There are also some security constraints imposed by Google that may stop access from far-eastern internet providers or known spam-servers.

Why are square tiles missing? Basically, Google renders the railway/canal maps and displays them in your browser as square tiles. Sometimes tiles don’t get provided. I’m afraid I can’t fix this as it’s a Google issue. Try reloading the page, or failing that emptying your browser cache.

Why is part of the map (a whole region) missing? Sometimes there is a delay between Google requesting the map and my webhost responding, and Google times out. When this happens, Google doesn’t try again and a large part of the map won’t be displayed. This should reset the next day, but if the problem persists then let me know.

What happened to the historic OS background maps? These backgrounds were provided by National Library of Scotland, who unfortunately now charge (a considerable sum) for these services. For the moment this means I have had to remove the backgrounds from the site.

Can I give you money to contribute to the running costs? Yes, you can buy me a coffee at Ko-fi which will help with the website running costs, keep the website advert free, and maybe buy some morale tokens. And words of support are always appreciated! Unfortunately I can't promise any additional services in return for a donation.

Can I link to the map? Yes, please do. You can create a link to a specific location by right-clicking on the map and selecting create link. If you want to create links from your own database or website then get in touch and I might be able to create a special marker and popup to go with your link.

Can I suggest a new idea for the website? Absolutely – I’m always interested in feedback. However, as it's just me then don’t be offended if I can’t incorporate your ideas.

Is there an App? No, only the website at present. You should be able to access the map through any browser on your phone, but I appreciate that some functionality can be difficult on a mobile. I have tried to reduce the data that the site uses to reduce your data charges, but the mapping imagery itself will always require an online connection.

What is your data source? I use out-of-copyright mapping and freely available online sources for all information on railways and routes. I also make use of information in Wikipedia, forums and other websites, and will try and link to them from the map. Also the information that you have provided over the years has been invaluable for those railways with scarce mapping or where only local knowledge can identify locations.

What railways are depicted? All railways, from all time periods. If it runs on rails and you can ride it then I want to include it. Not included are fairground rides (e.g. rollercoasters), temporary construction railways, model railways too small to ride, cable cars (no rails!) and underground mine railways (too difficult!) – unless I decide to include them.

What time period is depicted? All time periods, at once, on the same map. I realise that can result in a very busy map, but unfortunately I didn’t think of that when I started!

What do the colours mean? For the UK railway map, the colour identifies the owning company pre-grouping (circa 1923). For the US railroad map, the colour identifies the company that built the line. I have had trouble identifying US builders and some of the UK lines, so let me know if there are errors. There are some compromises with such a big map: Small private owner sidings are coloured for the track they join to. Many small industrial tramways are all coloured the same rather than being individually identified. In the UK, later (post 1923) tracks are coloured as if they were pre-grouping. In the UK, earlier (closed prior to 1923) tracks are coloured as if they still existed. In the UK, post BR (1948) tracks are identified as a separate dark-grey colour, but this is only for significant additions and every siding/junction isn’t picked out.

Why is a siding missing, or why isn’t a route shown as double track? I don’t include every siding/spur, but rather try to represent the extent of stations/yards and the different routes available. Double/triple track isn’t differentiated from single track, unless the different tracks significantly diverge.

What stations are included? Currently only the UK railway map has stations. All stations are included, including where they’ve been re-sited. Station names change a lot, so I’ve tried to include text that represents all the variations. When searching for stations I suggest you use the * wildcard at the beginning of your search term to take account of different names.

What features are included? Currently only the UK railway and canal map have features, and more are being added all the time. Included are significant features including industries, junctions, bridges and tunnels. Names sometimes change, so alternatives are provided in brackets and I suggest you use the * wildcard at the beginning of your search term. Signal boxes and station features aren’t currently included.

Why is the legend incomplete? If you spot a missing legend entry on the maps then please let me know – it should be complete!

How accurate is the map? I try to depict tracks so that you can easily find their location on the ground or on satellite imagery, and I have an aim of placing tracks within the right of way. My original aim was to help identify tracks when out exploring the landscape. However, there will be errors in my maps, and when overlaying them on imagery and different map backgrounds then those maps may have errors as well. Some areas, particularly in the US remain approximate, but I do try and refresh areas periodically. Bottom line – do not use the map for navigation or planning – use it as a starting point for your own research.

Can I contribute? If you’d like to provide inputs and corrections then please get in touch.

Can I advertise on the site? No. I aim to always keep RailMapOnline advert-free. However, if you’re a historical society, heritage railway, miniature railway operator or museum who’d like to add a link to your track depicted on the map then get in touch.

Why does the background map not have a place name or show a particular feature? The background maps are outside of my control, and you will have to contact the map providers to report a problem.

Can you fix an error on the Modern Rly layer? The Modern Rly layer is a direct copy of freely available OpenStreetMap data, and I make a new copy about once per year. I don't edit the layer or check it, and I'm afraid I don't have the capacity to make edits (my focus is on the Historic layers). However, anyone can help out with OpenStreetMap and provide edits, so if you want to get involved and start making your own maps then check them out.

Is there a secure version of the website (HTTPS)? Yes. You should automatically be redirected to the secure HTTPS site.

Why isn't geolocation showing my position? You need to be using the secure site (HTTPS), have switched on your device's location, and your browser may also require you to provide permission for the webpage to know your location. The accuracy of the position depends on your device's capability and whether services like GPS are available.

Do you collect data on me? No, I don’t use cookies or collect any data on how you use the website. More information here. If you get in touch with me by email or message me on Facebook or Twitter, I won’t use your contact details for marketing or pass it on to a third party.


(Since 2.24) Set to 1 to enable cluster features or 0 for single-node setups.

(Since 2.24) Define a generic URL pattern used to connect all cluster nodes. Each cluster node must be available at the given address. Three variables will be replaced to make this URL generic:
- $hostname$: the system hostname
- $url_prefix$: the URL prefix from url_prefix
- $proto$: http or https; autodetection only works with OMD and falls back to http otherwise
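As an illustrative sketch only (the setting name cluster_nodes is recalled from the Thruk documentation and should be verified against your version):

```
# illustrative: every node reachable under its own hostname, with the
# same protocol and url prefix as the local node
cluster_nodes = $proto$://$hostname$/$url_prefix$/
```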

(Since 2.24) Set the timeout after which a node is removed from the cluster.

DEPRECATED: setting this has no effect with Thruk 2.34 or later.

The REST API is enabled by default; disabling it would break Thruk operation.

(Since 2.24) Using API keys can be disabled by setting this to 0.

Note: this value cannot be overridden on a per-user/group basis because it is used at the pre-authentication stage. If you want users to create new keys, use max_api_keys_per_user.

(Since 2.32) Limit the number of keys a user may create. Set to 0 to disable creating new keys completely.

Specify user agents which will be redirected to the mobile plugin (if enabled).

Default theme to use for all users. Must be a valid sub directory in the themes folder.

Set the first day of the week; used in reports. Sunday: 0, Monday: 1.

Large reports will use temp files to avoid extreme memory usage. With 'report_use_temp_files' you may set the report duration in days which triggers the use of temp files. The default is 14 days, so for example the 'last31days' report will use temp files while the 'thisweek' report will not. Can be disabled by setting it to 0.
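For example, a sketch (the threshold value is illustrative) that switches to temp files for anything longer than a week:

```
# illustrative thruk_local.conf fragment: use temp files for any
# report covering more than 7 days
report_use_temp_files = 7
```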

Don’t create reports with more hosts/services than this number. The purpose is to avoid wrecking the server through excessive memory usage. Increase this number if you hit that limit and have plenty of memory left.

Include messages with (program messages) in reports. Setting this to 0 allows the MySQL backend to use indexes efficiently

Should Thruk update the logcache databases before running reports? Setting this to 0 reduces the time taken to run reports, but the most recent data is not necessarily available. If you use this option you should probably create a cron job to run "thruk -a logcacheupdate".

This link is used as startpage and points usually to the main.html with displays version information and general links.

This link is used whenever you click on one of the main logos. By default those logos are the Thruk logos and the link will take you to the Thruk homepage. Replace this with where you want your home location to be.

This link is used in the side navigation menu as link to the documentation. Replace with your documentation location. Set it to a blank value if you don’t want a documentation link in the menu at all.

Customizable link for the 'problems' entry in the side menu. Can be useful to reflect your company's error-handling process.

List of allowed patterns that links inside frames can be set to. You can link to /thruk/frame.html?link= Your wiki will then be displayed within the Thruk navigation frame. Useful for other addons, so they don't have to display their own navigation.

Maximum memory usage (in MB) at which a Thruk process will exit after finishing its request. Only affects the fcgid daemon.

Set this if a contact should be allowed to send commands unless it is defined for the contact itself. This is the default value for all contacts unless the user has a can_submit_commands setting in your monitoring configuration.

Use this to disable specific commands. Can be used multiple times to disable multiple commands. The number can be found in the 'cmd_typ' CGI parameter in links to the command page. If you only want to allow a few commands, use command_enabled instead. You may use ranges here. If you want to disable all commands, you can use command_disabled = 0-999 or set the authorized_for_read_only role.

See a list of available commands along with their ids on the commands page.

Enable only specific commands. Overrides the command_disabled setting by allowing only a few specific commands and disabling all others. The syntax is the same as for command_disabled. When command_enabled is used, all commands are disabled and only those listed in command_enabled can be used.

See a list of available commands along with their ids on the commands page.
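A sketch of how command_enabled is typically used (the command ids here are placeholders; look up real ids on the commands page):

```
# illustrative thruk_local.conf fragment: with command_enabled set,
# everything else is disabled automatically
command_enabled = 33
command_enabled = 34
```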

Convert authenticated username to lowercase.

Convert authenticated username to uppercase.

Convert the authenticated username using a regular expression. The following example removes everything after an @ from the authenticated username, so '[email protected]' becomes just 'user'.

When set to a true value, every contact will only see the hosts he is a contact for, plus the services he is a contact for. When disabled, a host contact will see all services of that host, regardless of whether he is a service contact or not.

Allow specific hosts to bypass the csrf protection which requires a generated token to submit certain post requests, for example to send commands. Use a comma separated list or multiple configuration attributes. Wildcards are allowed.
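A sketch of such an entry, assuming the option is named csrf_allowed_hosts (verify the exact name for your version); the addresses are placeholders:

```
# hosts that may send commands without a csrf token (wildcards allowed)
csrf_allowed_hosts = 127.0.0.1, 10.0.1.*
```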

Disable the possibility for a user to change his password. Only works with htpasswd passwords. To make this work you have to set a htpasswd entry in the Config Tool section.

Sets the minimum length a password must have for users changing their passwords. Admins can still change the password any way they want in the config tool. This only affects the user password reset.

<% include version="2.36" %>Show the basic auth user / password form. Enabled when using cookie auth. You may want to disable this if you only use oauth2 authentication.

The path to your cgi.cfg. See cgi.cfg for details.

The path to your log4perl configuration file.

Verbosity / debug level, same as setting the THRUK_VERBOSE environment variable.

0 = info / warnings (default)

3 = enables performance debug output for each request (same as THRUK_PERFORMANCE_DEBUG=3 in env)

Enable author tweaks. Same as setting THRUK_AUTHOR environment. Only required for development, disables caches, enables template strict mode and more.

If a page takes longer to render than this amount of seconds, a profile will be logged. Set to 0 to disable logging completely.

Set level of machine information sent in bug reports.

Possible options:

prod : contains release information (default)

full : contains uname and release information

none : no information

Defines an optional separate logfile with some extra audit relevant log entries. The different categories can be used to enable/disable specific messages. The logfile name can use strftime format patterns, for example to add the timestamp to the logfile.

Path to your plugins directory. Can be used to specify a different location for your Thruk plugins. Don't forget to set appropriate apache alias or rewrite rules when changing the plugin path. Otherwise the static content from plugins is not accessible.

Example redirect rule for apache:

Url to Thruk's plugin registry. The url must supply a json data structure with a list of Thruk plugins. Can be specified multiple times.

Path to your themes directory. Can be used to specify a different location for your Thruk themes. Don't forget to set appropriate apache alias or rewrite rules when changing the themes path. Otherwise the static content from your themes may not be accessible.

Path to the var directory. Thruk stores user specific data here.

Path to a temporary directory. Defaults to /tmp if not set and usually this is a good place.

The path to your ssi (server side includes) files. See Server Side Includes for details.

Specify an additional directory for user supplied templates. This makes it easy to override Thruk's own templates. Template search order is:

Changes the path to your logo images. Default is $url_prefix+'thruk/themes/'$current_theme'/images/logos/' and therefore relative to the currently selected theme. You could set a fixed path here. As usual, paths starting with a / will be absolute from your webserver root directory. Paths starting without a / will be relative to the cgi directory.

Location of your logos in your filesystem. This directory should be mapped to your 'logo_path_prefix' directory where 'logo_path_prefix' is the path relative to your webserver root directory and 'physical_logo_path' is the corresponding filesystem path.

Mode used when creating or saving files.

Mode used when creating folders

Set a general resource file. Be warned, if any macros contain sensitive data like passwords, setting this option could expose that data to unauthorized users. It is strongly recommended that this option is only used if no passwords are used in this file, or in combination with the 'expand_user_macros' option which limits which macros are exposed to the user. Instead of using a general 'resource_file' you could define one file per peer in your peer config.

Search long_plugin_output in the default search, ex. from the side navigation. It is enabled by default, but can have a significant performance impact in larger setups.

<% include version="1.86-2" %>The default_service_filter sets a default service filter which is used when no other filter is applied (except from links to hosts or groups). The filter is negated by a leading exclamation mark. The example filters out all services starting with "test_". You can use regular expressions. The default is not set.

Using the pager will make huge pages much faster as most people don’t want a services page with 100.000 services displayed. Can be disabled if you don’t need it.

Define the selectable paging steps. Use the * to set the default selected value.
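For example (the * marks the default selection; the values themselves are illustrative):

```
paging_steps = *100, 500, 1000, 5000, all
```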

Just like the paging_steps, but only for the groups overview page.

Just like the paging_steps, but only for the groups summary page.

Just like the paging_steps, but only for the groups grid page.

Cut off objects on the problems page, set 0 to disable the limit completely. Defaults to 500.

Change the path to your host action icons. You may use relative paths to specify a completely different location. You may also want to use 'action_pnp.png' when using pnp. The icon can be overridden by a custom variable '_ACTION_ICON'.

Change the path to your service action icons. You may use relative paths to specify a completely different location. You may also want to use 'action_pnp.png' when using pnp. The icon can be overridden by a custom variable '_ACTION_ICON'.

Set whether you want to use a framed navigation or not. Using frames sometimes makes it easier to include addons. See the allowed_frame_links option for how to integrate addons.

Show the new split command box on the host / service details page.

The email address bug reports will be sent to.

Default timeformat. Use POSIX format.

Default trends timeformat.

Default timeformat for todays date. Can be useful if you want a shorter date format for today.

On which event should the comments / downtime or longpluginoutput popup show up. Valid values are onclick or onmouseover.

Options for the popup window used for long plugin output, downtimes and comments. See the popup library's documentation for which options are available.

Display the current number of notification after the current / max attempts on the status details page.

<% include version="2.14" %>List of default columns on host details page. Determines which columns and the order of the displayed columns. See an example on the Dynamic Views page.

<% include version="2.14" %>List of default columns on service details page. Determines which columns and the order of the displayed columns. See an example on the Dynamic Views page.

<% include version="2.38" %>List of default columns on overview details page. Determines which columns and the order of the displayed columns. See an example on the Dynamic Views page.

<% include version="2.38" %>List of default columns on grid details page. Determines which columns and the order of the displayed columns. See an example on the Dynamic Views page.

Display the backend/site name in the status table. This is useful if you have same hosts or services on different backends and need to know which one returns an error. Valid values are:

Show links to config tool for each host / service. You need to have the config tool plugin enabled and you need proper permissions for the link to appear.

Display the full command line for host / service checks. Be warned, the command line could contain passwords and other confidential data. In order to replace the user macros for commands, you have to set the 'resource_file' in your peer config or a general resource_file option.

0 = off, don’t show the command line at all

1 = show them for contacts with the role: authorized_for_configuration_information

2 = show them for everyone

<% include version="2.18" %>Replace pattern for expanded command lines. Can be used to keep sensitive information from being displayed in the gui. The pattern is a simple perl regular substitute expression in the form of '/pattern/replacement/'.

Usually the source of your expanded check_command should be the check_command attribute of your host / service. But under certain circumstances you might want to display expanded commands from a custom variable. In this case, set 'show_full_commandline_source' to '_CUST_VAR_NAME'.

Show additional logout button next to the top right preferences button. (works only together with cookie authentication)

<% include version="2.42" %>Change url of logout link. Might be useful in combination with oauth.

When a plugin returns more than one line of output, the output can be displayed directly in the status table, as popup or not at all. Choose between popup, inline and off

Color complete status line with status colour or just the status itself.

Show if a host / service has modified attributes.

Show host / service contacts. User must have the configuration_information role.

Show check attempts for hosts too. The default is to show them on the problems page only. Use this value to force a value.

This option enables a performance bar inside the status/host list which create a graph from the performance data of the plugin output. Available options are 'match', 'first', 'all', 'worst' and 'off'.

Show pnp popup if performance data are available and pnp is used as graph engine. The popup will be available on the performance data bar chart on the right side of each host/service. It uses the normal pnp popup logic, so you need to install the proper SSI files.

If set, an Internet Explorer (IE) compatibility header will be added to the html header.

Defines the order used to determine the worst/best states. Used in business processes and the panorama dashboard. Can be overridden in those plugins.

Show inline pnp graph if available. If a service or host has a pnp4nagios action or notes url set, Thruk will show an inline graph on the extinfo page. This works for /pnp4nagios/ and /pnp/ urls.

graph_word is a regexp used to display any graph on the details page. If a service or host has a graph url set in the action url (or notes url), the graph can be displayed by specifying a regular expression that always appears in this url. You can specify multiple graph_words.

When using pnp4nagios, no graph_word is required, just keep it empty.

sample service configuration for graphite:

Quotes are supported in the action_url statement; you may want to use them for special graphite functions. Do not escape double quotes here, otherwise the graph won't work.

graph_replace is another regular expression to rewrite special characters in the url. For example graphite requires all non-word characters replaced by underscores while graphios needs spaces removed too. You can use this setting multiple times.

sample service configuration for graphite:

sample service configuration for graphios:

The http_backend_reverse_proxy will proxy requests for pnp or grafana action_urls via the http backend if possible. This only works for http backends and if cookie auth is enabled. Can also be used to proxy Thruk nodes (experimental).

Possible options:

0 : disabled

1 : enabled

Show custom vars in host / service ext info. List variable names to display in the host and service extinfo details page. Can be specified more than once to define multiple variables. You may use html in your variables. Use * as wildcard, ex.: _VAR* To show a host custom variable for services, prepend _HOST, ex.: _HOSTVAR1. To show all host variables in the service view, use wildcards, ex.: _HOST* Host variables are only used with HOST*, not by * alone, see examples.
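The rules above can be illustrated with a fragment like this (the variable names are made up for the example):

```
# show one specific variable and all variables starting with _VAR
show_custom_vars = _LOCATION
show_custom_vars = _VAR*

# additionally show all host variables in the service view
show_custom_vars = _HOST*
```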

Expose custom vars sets a list of custom variables which are safe for all users/contacts to view. They will be used in filtering and column selection as well as in json result sets. Basically they will be handled the same way as show_custom_vars except they will not be displayed automatically. The syntax is the same as for show_custom_vars.

Expand user macros ($USERx$) for host / service commands and custom variables. Can be specified more than once to define multiple user macros to expand. Be warned, some user macros can contain passwords and expanding them could expose them to unauthorized users. Use * as wildcard, ex.: USER*

Defaults to 'ALL', which means all user macros are expanded, because it is limited to admin users anyway.

Show a link to bug reports when internal errors occur. Set to '1' to show an error icon which links to an error report mail. Set to 'server' to log js errors server side. Set to 'both' to log server side but still show the icon.

ex.: show_error_reports = both

Don't report known harmless javascript errors.

ex.: skip_js_errors = cluetip is not a function

Normally passive checks would be marked as disabled. With this option set, disabled checks will only be displayed as disabled if their last result was active. Otherwise they would be marked as passive checks. This option also changes the passive icon only to be shown when the last check was passive, otherwise the disabled icon will be displayed.

Normally passive checks would be displayed with a passive icon if their last result is passive. With this option, the passive icon will be hidden in status details.

The sitepanel is used to display multiple backends/sites at a glance. With 10 or more sites, the list of backends will be combined into the 'compact' site panel which just displays the totals of available / down / disabled sites. The 'compact' panel will also automatically be used if you use sections. With more than 50 backends, the 'collapsed' panel will be selected in 'auto' mode. With more than 100 backends, the 'tree' panel will be selected in 'auto' mode. Set sitepanel to list/compact/collapsed/tree/auto/off to change the default behaviour.

You can integrate the output of apache status pages into Thruk. The following list of apache status pages will be accessible from the performance info page. Make sure the page is accessible from Thruk; credentials will be passed through, so both basic authentication and ip based authentication are possible. Read more about Apache's mod_status in the apache documentation.

DEPRECATED: please use LMD when using multiple backends.

Set logging of backend in verbose mode. This only makes sense when debug logging is activated.

Use connection pool when accessing multiple sites. Increases the performance because backends will be queried parallel but uses around 10mb of memory per pool member. Disabled when set to 0, number of concurrent connections otherwise.

<% include version="2.12" %>Enable lmd connection handling. Set to 1 to enable. LMD handles all backend connections in a separate process which will be started automatically with Thruk if enabled. Read more in the LMD section.

Path to additional lmd configuration. The sites will be automatically generated. Can be used multiple times.

Set some extra command line options when starting lmd.

Thruk waits this timeout for lmd to respond, otherwise it gets killed and restarted. Set to 0 to turn off automatic restarts (it will still be started if it is not running).

Instead of using LMD managed by Thruk, you can run your own LMD and let Thruk use that one instead

Enables caching logfiles for faster access and less memory usage for the naemon process. The cache supports only MySQL. The format is a MySQL connection string like 'mysql://hostname:port/db'. Using a cache dramatically decreases cpu and memory usage of Thruk and Naemon when accessing logfiles, for example when creating reports.
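A connection string following the format above might look like this (the database name is an example; whether credentials belong in the string depends on your setup and is an assumption here):

```
logcache = mysql://localhost:3306/thruk_logs

# with credentials, if your setup requires them:
#logcache = mysql://thruk:secret@localhost:3306/thruk_logs
```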

<% include version="2.10" %>Define filter which prevents the logcache from overgrowing with useless log messages. Since the main reason for the logcache are availability reports it is ok to remove some entries. Can be used multiple times.

<% include version="2.12" %>This option enables/disables the delta updates of the logcache whenever somebody opens a page which requires logfiles, ex.: the showlog page. This improves the responsiveness of the page but you miss the latest log entries since the last manual update.

When having multiple sites, you can change the number of parallel updates with the logcache_worker option. Setting worker number to 1 disables parallel execution.

Default duration when running thruk logcache clean.

Default duration when running thruk logcache compact. Compact removes duplicate alerts having the same state. It also removes basically everything not required for sla reports and keeps a few extras like notifications.

Define whether the logcache will be bypassed if the start / end time of a log query is outside the range of the cache.

0 : never, only use cached logs and return an empty result if outside the cached range. (default)

1 : partially, bypass the logcache if start and end are outside the cache range, otherwise return a partial result.

2 : always, bypass the logcache if either start or end is outside the cache range.

The import command replaces the builtin logcache update with an external script which is then responsible for updating the logcache database. This might be useful if you pull the logfiles from a ndo/ido database and then manually import those files.

There are some useful environment variables set before the script is started:

standard macros as listed in CLI Environment

THRUK_BACKENDS is a semicolon separated list of the selected backends.

THRUK_LOGCACHE is the connection string to the thruk logcache database.

THRUK_LOGCACHE_MODE is either 'import' on first initial import or 'update' for further consecutive updates.
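A custom import script could be a simple shell wrapper. The following is a sketch only (the script itself and its behaviour are hypothetical; it merely demonstrates reading the environment variables listed above, the actual database import is left out):

```shell
#!/bin/sh
# Hypothetical logcache import wrapper -- not part of Thruk.
set -eu

# default to 'import' when run outside of Thruk
MODE="${THRUK_LOGCACHE_MODE:-import}"
BACKENDS="${THRUK_BACKENDS:-}"   # semicolon separated backend ids

if [ "$MODE" = "import" ]; then
    echo "initial import for backends: $BACKENDS"
else
    echo "delta update for backends: $BACKENDS"
fi
```

It would then be referenced from the configuration, e.g. logcache_import_command = /etc/thruk/logcache_import.sh (the path is an example).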

The fetchlogs command is very similar to the logcache_import_command but it replaces only the log-fetching part of the builtin logcache. This script should return the plain text logfiles on stdout (standard naemon/nagios logfile format). This might be useful if you pull the logfiles from a ndo/ido database.

When having mixed backend cores, this command can be overridden in the peer configuration.

See ./support/ for an example.

There are some useful environment variables set before the script is started to control which logs should be fetched:

REMOTE_USER contains the current user.

THRUK_BACKEND is the id of the backend to import.

THRUK_LOGCACHE_START is the start date to fetch.

THRUK_LOGCACHE_END is the end date to fetch.

THRUK_LOGCACHE_LIMIT is the optional limit of logfiles to fetch.

If you are using a mysql database with galera replication such as MariaDB Cluster, Percona XtraDB Cluster or Galera Cluster it is a good idea to avoid locks and optimize/repair table statements since they are not properly replicated.

Especially in Percona XtraDB Cluster > 5.6, the default setting of pxc_strict_mode will disable locks altogether.

This setting will make the logcache work in that case. More information about pxc_strict_mode is available in the Percona documentation.

Delay the page delivery until the backend's uptime is at least this amount of seconds. Displaying pages soon after a backend restart may show wrong results with all services pending. Enable this if you experience problems with pending services after reloading your backend. Should be obsolete with Livestatus versions greater than 1.2. Ex.: setting this to 10 would start serving pages 10 seconds after the backend reload.

Can be set to enable / disable hostname verification for https connections, for example for the cookie login, https backends or oauth requests. It is not recommended to disable hostname verification; set ssl_ca_path or ssl_ca_file instead.

Sets the path to your certificates. Either set ssl_ca_path or ssl_ca_file, not both. Defaults to ssl_ca_file = Mozilla::CA::SSL_ca_file() if the Mozilla::CA perl module is installed, or ssl_ca_path = '/etc/ssl/certs' otherwise.

Sets path to your ca store. See ssl_ca_path for details.

Cookie Authentication Settings

Specifies the url where non-authenticated users will be redirected to.

Specifies the url against which the cookie auth will verify the credentials.

Specifies the timeout for idle sessions. The session will be removed if not used within this time period.

Specifies the amount of seconds in which subsequent requests won't verify authentication again. Set to zero to disable storing hashed credentials in the filesystem and to disable revalidation of active sessions.

Timeout for internal sub request on authentication url. Defaults to 10 seconds and can be disabled by setting it to zero.

Cookie domain is usually set automatically. Use this option to override the default value. Domains have to contain at least two periods. Useful for single sign on environments.

Hook script which is called on every successful login. The REMOTE_USER environment variable will be set to the username of the current logged in user. Useful to do magic stuff on each login. The REMOTE_USER_GROUPS environment variable contains semicolon separated list of contactgroups. Available standard environment variables are listed on the CLI Environment page.

<% include version="2.12" %>Disable account after this number of failed login attempts. This feature will be disabled if set to zero.

<% include version="2.46" %>The error message when an account is locked, may contain html.

<% include version="2.32" %>Increase logging of cookie authentication related things. This usually gets printed to the apache error log.

OAuth2 Authentication Settings

When the oauth provider needs to configure an allowed callback url, set the url of the login page, ex.:

or without <omdsite> when not using OMD.

Set oauth (oauth2) authentication provider

Set the default checked state for command options.

Forces acknowledgments to be sticky.

Forces sending a notification for acknowledgments.

Forces comments on acknowledgments to be persistent.

Forces normal comments to be persistent.

Default duration of new downtimes in seconds. Default is 2 hours.

Maximum duration of new downtimes. Use quantifiers like d=days, w=weeks, y=years to set human readable values. Default is unlimited.

Default duration of acknowledgements with expire date. Default is one day.

Configure which commands should be available as quick status commands.

When you want to reschedule passive checks for which the result is fetched by an agent (for example check_mk or some scenarios of check_multi), you usually want to reschedule the agent instead of the passive check.

The command reschedule alias can be used to translate the reschedule command from the passive service to the active agent service.

The pattern will be tested against the service description and the command_name of the passive check.

The resulting service must be on the same host and the contact must be authorized for that service too.

The pattern must be a valid perl regular expression.

Duplicates will be removed, so rescheduling 10 services which resolve to the same master service will only trigger one reschedule.

Only passive services will be translated

In this example, all passive check_mk checks will trigger the active agent check and therefore allow you to reschedule passive checks directly from the problems page.
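A sketch of such an alias; both the option name command_reschedule_alias and the 'pattern;agent service' form are assumptions here, so verify them against your Thruk version:

```
# passive services whose description or check command matches 'check_mk'
# are rescheduled via the active 'Check_MK' agent service instead
command_reschedule_alias = check_mk;Check_MK
```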

Use recurring downtime, shows recurring downtime links.

Use service’s description instead of display name.

Use trends, shows trend links.

Waiting is a livestatus feature. When enabled, Thruk will wait after rescheduling hosts/services checks until the check has been really executed up to a maximum of 10 seconds. Adjust the time waiting with the 'wait_timeout' option.

Amount of seconds to wait until a rescheduled check finishes. Thruk will wait this amount and display the result immediately.

If set to 1, the user has to enter a comment for all disable active checks / disable notifications / disable event handler commands. These comments are automatically prefixed with the command name and will be deleted when checks / notifications / handlers are enabled again. They are also used by the 'reenable_actions' utility.

Specify a file which is then completely under the control of Thruk. It will be used to store cronjobs, ex. for reports. The file has to be writable by Thruk.

The pre edit cmd can be used to run a command just before Thruk edits the crontab.

The post edit cmd is necessary for OMD, where you need to reload the crontab after editing, or for replacing the user's cron with the edited file.

Path to your thruk executable. Will be used in cronjobs.

<% include version="1.86" %>The Action Menu is a way to create custom icons and menus for every host or service. There are two ways to set the menu. The first is to assign the menu json data directly into the _THRUK_ACTION_MENU custom variable of your host or service. Alternatively, you can just put a placeholder into the _THRUK_ACTION_MENU custom variable and define the actual menu in 'action_menu_items'. You may add multiple action icons or even multiple menus for each host or service.

See the Action Menu section from the advanced topics for more examples and details.

<% include version="1.86" %>Defines the menu used by placeholders from the '_THRUK_ACTION_MENU' custom variable. The menu is a key/value pair with the name and the menu description in json format. The menu can either be a single icon/menu or a list of menus and icons.

A simple menu could look like this; note that the menu has to be in a single line without newlines, so all newlines from the example have to be removed in order to try it, but it's more readable this way. You can also use a trailing backslash to write the menus on multiple lines.

Sample menu with two items and a separator:

A menu has the following attributes:

icon icon for the menu itself. You can use <% raw %><> <% endraw %>as placeholder in the url and <% raw %><> <% endraw %>for the user name. Within OMD, the site variable <% raw %><> <% endraw %>must be prepended.

title title of the menu, will be displayed on mouse over.

menu the actual menu definition as a list '[…​]' of sub items.

…​ arbitrary attributes will be used as attributes of the menu icon html element.

A single "-" item can be used as a menu item separator.

The menu item can have the following attributes:

icon icon for the menu item. You can use <% raw %><> <% endraw %>as placeholder in the url. Within OMD, the site variable <% raw %><> <% endraw %>must be prepended.

label label name of the menu item.

menu list of sub menu items.

action url or action which will be run or opened. This can either be a http(s) link or a serveraction in the form server://actionname/argument1/argument2/…​ where the actionname must be a reference to a command from 'action_menu_actions'. You may use <% raw %><> <% endraw %>here too. Also javascript: links are valid, for example javascript:alert('$HOSTNAME
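Putting the attributes above together, a hypothetical action_menu_items entry could look like this. The 'name => json' form is an assumption, and all icon paths and urls are placeholders; the trailing backslashes continue the line as described above:

```
action_menu_items = my_menu => \
    { "icon": "/thruk/themes/Thruk/images/dropdown.png", \
      "title": "Actions", \
      "menu": [ \
        { "icon": "/thruk/images/ping.png", \
          "label": "Ping", \
          "action": "server://ping/$HOSTNAME$" }, \
        "-", \
        { "label": "Wiki", \
          "action": "https://wiki.example.com/$HOSTNAME$" } \
      ] \
    }
```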

Autofill with Authenticator

Q: What is Autofill with Authenticator?

A: The Authenticator app now securely stores and autofills passwords on apps and websites you visit on your phone. You can use Autofill to sync and autofill your passwords on your iOS and Android devices. After setting up the Authenticator app as an autofill provider on your phone, it offers to save your passwords when you enter them on a site or in an app sign-in page. The passwords are saved as part of your personal Microsoft account and are also available when you sign in to Microsoft Edge with your personal Microsoft account.

Q: What information can Authenticator autofill for me?

A: Authenticator can autofill usernames and passwords on sites and apps you visit on your phone.

Q: How do I turn on password autofill in Authenticator on my phone?

A: Follow these steps:

  1. Open the Authenticator app.
  2. On the Passwords tab in Authenticator, select Sign in with Microsoft and sign in using your Microsoft account. This feature currently supports only Microsoft accounts and doesn't yet support work or school accounts.

Q: How do I make Authenticator the default autofill provider on my phone?

A: Follow these steps:

Open the Authenticator app.

On the Passwords tab inside the app, select Sign in with Microsoft and sign in using your Microsoft account.

  • On iOS, under Settings, select How to turn on Autofill in the Autofill settings section to learn how to set Authenticator as the default autofill provider.
  • On Android, under Settings, select Set as Autofill provider in the Autofill settings section.

Q: What if Autofill is not available for me in Settings?

A: If Autofill is not available for you in Authenticator, it might be because autofill has not yet been allowed for your organization or account type. You can use this feature on a device where your work or school account isn’t added. To learn more on how to allow Autofill for your organization, see Autofill for IT admins.

Q: How do I stop syncing passwords?

A: To stop syncing passwords in the Authenticator app, open Settings > Autofill settings > Sync account. On the next screen, you can select Stop sync and remove all autofill data. This will remove passwords and other autofill data from the device. Removing autofill data doesn't affect multi-factor authentication.

Q: How are my passwords protected by the Authenticator app?

A: The Authenticator app already provides a high level of security for multi-factor authentication and account management, and the same high security bar is extended to managing your passwords.

  • Strong authentication is required by the Authenticator app: Signing into Authenticator requires a second factor. This means that your passwords inside the Authenticator app are protected even if someone has your Microsoft account password.
  • Autofill data is protected with biometrics and passcode: Before you can autofill a password on an app or site, Authenticator requires a biometric or device passcode. This adds extra security so that even if someone else has access to your device, they can't fill or see your password, because they're unable to provide the biometric or device PIN input. Also, a user cannot open the Passwords page unless they provide a biometric or PIN, even if they turn off App Lock in app settings.
  • Encrypted passwords on the device: Passwords on the device are encrypted, and encryption/decryption keys are never stored and always generated when needed. Passwords are only decrypted when the user wants them to be, that is, during autofill or when the user wants to see the password, both of which require a biometric or PIN.
  • Cloud and network security: Your passwords in the cloud are encrypted and decrypted only when they reach your device. Passwords are synced over an SSL-protected HTTPS connection, which helps prevent an attacker from eavesdropping on sensitive data while it is being synced. We also check the sanity of data being synced over the network using cryptographic hash functions (specifically, hash-based message authentication codes).

(Re)presenting the data at a street scale

With the crime data attributed to the street network, it was possible to use these features as the unit of display for a revised cartographic style. Relative rates of crime were calculated as the frequency of crimes within a category divided by the total street segment length within each Thiessen Polygon. These ratios were multiplied by a thousand to convert the rates into crimes per kilometre. As part of this process, all individual streets within each Thiessen Polygon were combined into a single feature, otherwise rates would appear artificially high on those composite street segments with shorter lengths as denominators would be smaller. Furthermore, given that information about which streets crimes were actually located on was lacking, the re-appropriation of the point data back into the zonal geography should only be used to apply styles to the whole Thiessen Polygon zone, again to avoid those issues of spurious precision that are exhibited by the point data.

The crime-attributed street network data and rates were stored and processed within the PostGIS database. For display, these data were coupled with the map-rendering engine Mapnik, which enabled the generation of map tiles with custom cartography. Using OpenLayers as a map interface, new cartography was developed and displayed on top of a neutral feature background map.
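The way an XYZ tile layer resolves its tile URLs can be sketched as below. The URL template and the optional cache-busting query parameter are illustrative assumptions (the latter mirrors a common trick for forcing the browser to re-fetch freshly rendered tiles rather than serve stale ones from its cache), not the site's actual configuration.

```javascript
// Resolve an XYZ tile URL from a template, in the style of
// OpenLayers.Layer.XYZ. `template` uses ${z}/${x}/${y} placeholders.
function tileUrl(template, z, x, y, bustCache) {
  let url = template
    .replace("${z}", String(z))
    .replace("${x}", String(x))
    .replace("${y}", String(y));
  if (bustCache) {
    // Appending the current time makes each request URL unique, so the
    // browser fetches a fresh tile instead of reusing a cached one.
    url += "?time=" + Date.now();
  }
  return url;
}
```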

Two cartographic options were enabled to reflect the rates of crime: the first scaled the widths of the street network (see Figure 3a), and the second altered the street network by colour intensity (see Figure 3b). With regard to colour selection, a ColorBrewer Yellow-Orange-Red sequential nine-step colour ramp was chosen for its aesthetic appeal and accessibility (Harrower and Brewer 2003). The purpose of showing different visualisations was to enable these to be evaluated by stakeholders at a later stage, demonstrating how easily different styling options could be applied. The ability to adapt cartographic styles, including more advanced features such as a scaling factor to adjust line widths or colour intensity, was only available on the full map view. We argue that both the line-width and coloured-street cartographic styles hold greater utility for interpretation than the display of points, and convey the lack of spatial accuracy due to disclosure control in a more appropriate way. At present the website excludes those crimes recorded at the non-street centroids that were more recently added to the source data. These could, however, be integrated into the representation by aggregation into a revised Thiessen polygon geography, or, more appropriately, visualised as points, lines or polygons, depending on the nature of the recorded feature.
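A minimal sketch of the colour-intensity option might classify each street's rate against the nine-step ramp. The hex values below are the standard ColorBrewer YlOrRd nine-class colours; the equal-interval class breaks are an assumption, since the article does not state the classification method used.

```javascript
// ColorBrewer Yellow-Orange-Red, nine classes (light to dark).
const YLORRD_9 = [
  "#ffffcc", "#ffeda0", "#fed976", "#feb24c", "#fd8d3c",
  "#fc4e2a", "#e31a1c", "#bd0026", "#800026"
];

// Map a crime rate onto one of nine equal-interval classes
// between 0 and maxRate, clamping values outside that range.
function colourForRate(rate, maxRate) {
  const clamped = Math.max(0, Math.min(rate, maxRate));
  const idx = Math.min(8, Math.floor((clamped / maxRate) * 9));
  return YLORRD_9[idx];
}
```

A quantile classification would be an equally plausible choice and would spread streets more evenly across the nine classes.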

Alternative cartography: (a) line scaling; (b) line colour

When a user visits, search is enabled by input of a full postcode, and the initial screen incorporates a more limited map view highlighting a one-mile radius around the searched postcode (Figure 4). In addition to the map, those crime points falling within a one-mile radius of the postcode are aggregated over a rolling six-month interval and tabulated as absolute crimes by type per month. A trend rate is calculated by comparing the first and latter three months to give an indication of change over the course of the six-month period. Pink-to-green colours are also used to indicate the directionality and intensity of the percentage change. The limited map view enables the display of different crime types and durations. Furthermore, when the ‘slippy map’ is moved, the change of focus is detected and the user is asked whether they wish to refresh the table of results. In addition to the crime data, a call is sent to the API with a request for the contact details of the neighbourhood policing team responsible for the searched area, with the purpose of stimulating greater public engagement. This feature could be expanded in the future to incorporate an emailing system that extracts statistics from the website, enabling stakeholders to send these to the neighbourhood policing team alongside further commentary on the observed patterns, thus providing a community-based contribution to Problem Oriented Policing (Goldstein 1979), where policing attention focuses on the underlying causes of events rather than the servicing of individual callouts viewed in isolation.
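The trend rate described above can be sketched as follows. The input layout (six monthly counts, oldest first) is an assumption for illustration; the article specifies only that the first and latter three months of the window are compared.

```javascript
// Percentage change between the first and latter three months of a
// rolling six-month window of monthly crime counts (oldest first).
function trendRate(monthlyCounts) {
  if (monthlyCounts.length !== 6) {
    throw new Error("expected six monthly counts");
  }
  const sum = (arr) => arr.reduce((a, b) => a + b, 0);
  const first = sum(monthlyCounts.slice(0, 3));
  const latter = sum(monthlyCounts.slice(3));
  if (first === 0) {
    // No baseline: report no change unless crimes appeared later.
    return latter > 0 ? Infinity : 0;
  }
  return ((latter - first) / first) * 100;
}
```

A positive result would map to the pink end of the pink-to-green colour scale (crime rising), a negative result to the green end.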

Search results showing the limited map view alongside the change analysis table
